How to use Apache ActiveMQ Artemis in Kubernetes

I have a workload in Kubernetes that contains an Apache ActiveMQ Artemis broker. The broker starts properly when the workload has a single pod; the problem starts when I try to scale it. The brokers in the pods can't connect to each other, so I can't scale my workload. My final goal is to make it scalable. I tried it locally with two Docker containers and it worked fine.
Here is my broker.xml:
<configuration xmlns="urn:activemq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xi="http://www.w3.org/2001/XInclude"
xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
<core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:activemq:core ">
<name>Broker1</name>
<broadcast-groups>
<broadcast-group name="brokerCluster-broadcast">
<local-bind-address>0.0.0.0</local-bind-address>
<local-bind-port>10000</local-bind-port>
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<broadcast-period>20</broadcast-period>
<connector-ref>netty-connector</connector-ref>
</broadcast-group>
</broadcast-groups>
<discovery-groups>
<discovery-group name="brokerCluster-discovery">
<local-bind-port>10000</local-bind-port>
<local-bind-address>0.0.0.0</local-bind-address>
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<refresh-timeout>10</refresh-timeout>
</discovery-group>
</discovery-groups>
<cluster-connections>
<cluster-connection name="brokerCluster">
<connector-ref>netty-connector</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<discovery-group-ref discovery-group-name="brokerCluster-discovery"/>
</cluster-connection>
</cluster-connections>
<connectors>
<connector name="netty-connector">tcp://0.0.0.0:61610</connector>
</connectors>
<persistence-enabled>true</persistence-enabled>
<journal-type>NIO</journal-type>
<paging-directory>data/paging</paging-directory>
<bindings-directory>data/bindings</bindings-directory>
<journal-directory>data/journal</journal-directory>
<large-messages-directory>data/large-messages</large-messages-directory>
<journal-datasync>true</journal-datasync>
<journal-min-files>2</journal-min-files>
<journal-pool-files>10</journal-pool-files>
<journal-device-block-size>4096</journal-device-block-size>
<journal-file-size>10M</journal-file-size>
<journal-buffer-timeout>536000</journal-buffer-timeout>
<disk-scan-period>5000</disk-scan-period>
<max-disk-usage>90</max-disk-usage>
<critical-analyzer>true</critical-analyzer>
<critical-analyzer-timeout>120000</critical-analyzer-timeout>
<critical-analyzer-check-period>60000</critical-analyzer-check-period>
<critical-analyzer-policy>HALT</critical-analyzer-policy>
<page-sync-timeout>536000</page-sync-timeout>
<acceptors>
<acceptor name="netty-acceptor">tcp://0.0.0.0:61610</acceptor>
<acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
<acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
<acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>
<acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>
<acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>
</acceptors>
<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="amq"/>
<permission type="deleteNonDurableQueue" roles="amq"/>
<permission type="createDurableQueue" roles="amq"/>
<permission type="deleteDurableQueue" roles="amq"/>
<permission type="createAddress" roles="amq"/>
<permission type="deleteAddress" roles="amq"/>
<permission type="consume" roles="amq"/>
<permission type="browse" roles="amq"/>
<permission type="send" roles="amq"/>
<permission type="manage" roles="amq"/>
</security-setting>
</security-settings>
<address-settings>
<address-setting match="activemq.management#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
<address-setting match="#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<redistribution-delay>0</redistribution-delay>
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
</address-settings>
<addresses>
<address name="DLQ">
<anycast>
<queue name="DLQ" />
</anycast>
</address>
<address name="ExpiryQueue">
<anycast>
<queue name="ExpiryQueue" />
</anycast>
</address>
<address name="TestQueue">
<anycast>
<queue name="testQueue" />
</anycast>
</address>
</addresses>
</core>
</configuration>
Edit: attached Kubernetes and Docker configs.
deployment.yml
apiVersion: v1
kind: Service
metadata:
  name: artemis
  labels:
    app: artemis
spec:
  ports:
    - port: 6161
      name: service
      protocol: UDP
    - port: 8161
      name: console
      protocol: UDP
    - port: 9876
      name: broadcast
      protocol: UDP
    - port: 61610
      name: netty-connector
      protocol: TCP
    - port: 5672
      name: acceptor-amqp
      protocol: TCP
    - port: 61613
      name: acceptor-stomp
      protocol: TCP
    - port: 5445
      name: accep-hornetq
      protocol: TCP
    - port: 1883
      name: acceptor-mqt
      protocol: TCP
    - port: 10000
      protocol: UDP
      name: brokercluster-broadcast # this name is invalid but I wanted to match it to my broker.xml
  clusterIP: None
  selector:
    app: artemis01
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: artemis01headless
  namespace: artemis
spec:
  selector:
    matchLabels:
      app: artemis01
  serviceName: artemis01
  replicas: 3
  template:
    metadata:
      labels:
        app: artemis01
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - worker
      containers:
        - env:
            - name: ARTEMIS_PASSWORD
              value: admin
            - name: ARTEMIS_USER
              value: admin
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.podIP
          name: artemis
          image:
          ports:
            - containerPort: 6161
              name: service
              protocol: UDP
            - containerPort: 8161
              name: console
              protocol: UDP
            - containerPort: 9876
              name: broadcast
              protocol: UDP
            - containerPort: 61610
              name: netty-connector
              protocol: TCP
            - containerPort: 5672
              name: acceptor-amqp
              protocol: TCP
            - containerPort: 61613
              name: acceptor-stomp
              protocol: TCP
            - containerPort: 5445
              name: accep-hornetq
              protocol: TCP
            - containerPort: 1883
              name: acceptor-mqtt
              protocol: TCP
            - containerPort: 10000
              name: brokercluster-broadcast
              protocol: UDP
      imagePullSecrets:
        - name: xxxxxxx
Dockerfile source
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# ActiveMQ Artemis
FROM jboss/base-jdk:8
LABEL maintainer="Apache ActiveMQ Team"
# Make sure pipes are considered to determine success, see: https://github.com/hadolint/hadolint/wiki/DL4006
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
WORKDIR /opt
ENV ARTEMIS_USER artemis
ENV ARTEMIS_PASSWORD artemis
ENV ANONYMOUS_LOGIN false
ENV CREATE_ARGUMENTS --user ${ARTEMIS_USER} --password ${ARTEMIS_PASSWORD} --silent --http-host 0.0.0.0 --relax-jolokia
USER root
# add user and group for artemis
RUN groupadd -g 1001 -r artemis && useradd -r -u 1001 -g artemis artemis \
&& yum install -y libaio && yum -y clean all
USER artemis
ADD . /opt/activemq-artemis
# Web Server
EXPOSE 8161 \
61610 \
9876 \
61613 \
61616 \
5672 \
5445 \
1883 \
10000
USER root
RUN mkdir /var/lib/artemis-instance && chown -R artemis.artemis /var/lib/artemis-instance
COPY ./docker/docker-run.sh /
USER artemis
# Expose some outstanding folders
VOLUME ["/var/lib/artemis-instance"]
WORKDIR /var/lib/artemis-instance
ENTRYPOINT ["/docker-run.sh"]
CMD ["run"]
run.sh
set -e
BROKER_HOME=/var/lib/
CONFIG_PATH=$BROKER_HOME/etc
export BROKER_HOME OVERRIDE_PATH CONFIG_PATH
echo CREATE_ARGUMENTS=${CREATE_ARGUMENTS}
if ! [ -f ./etc/broker.xml ]; then
  /opt/activemq-artemis/bin/artemis create ${CREATE_ARGUMENTS} .
  # the script copies my broker.xml to /var/lib/artemis-instance/etc/broker.xml here
  sed -i -e 's|$PLACEHOLDERIP|'$MY_POD_IP'|g' /var/lib/artemis-instance/etc/broker.xml
else
  echo "broker already created, ignoring creation"
fi
exec ./bin/artemis "$@"

I believe the issue is with your connector configuration. This is what you're using:
<connector name="netty-connector">tcp://0.0.0.0:61610</connector>
The information from this connector gets broadcast to the other cluster members since you've specified it in the <connector-ref> of your <cluster-connection>. The other cluster members then try to use this information to connect back to the node that broadcast it. However, 0.0.0.0 won't make sense to a remote client.
The address 0.0.0.0 is a meta-address. In the context of a listener (e.g. an Artemis acceptor) it means that the listener will listen for connections on all local addresses. In the context of a connector it doesn't really have a meaning. See this article for more about 0.0.0.0.
You should be using a real IP address or hostname that a client can use to actually get a network route to the server.
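For example, since your run.sh already replaces the literal string $PLACEHOLDERIP with the pod IP exposed through the MY_POD_IP environment variable, the connector could be declared with that placeholder (a sketch based on your existing startup script; Artemis itself does not resolve the placeholder, the sed step does):
<connectors>
   <!-- $PLACEHOLDERIP is rewritten to the pod's IP (MY_POD_IP) by run.sh before the broker starts -->
   <connector name="netty-connector">tcp://$PLACEHOLDERIP:61610</connector>
</connectors>
That way each broker broadcasts an address the other pods can actually route to.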
Also, since you're using UDP multicast (i.e. via the <broadcast-group> and <discovery-group>) please ensure this functions as well between the containers/pods. If you can't get UDP multicast working in your environment (or simply don't want to) you could switch to a static cluster configuration. Refer to the documentation and the "clustered static discovery" example for details on how to configure this.
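For reference, here is a minimal static-cluster sketch using the stable DNS names a StatefulSet gets from its governing headless Service. The host names below are assumptions derived from your manifest (i.e. <pod-name>.<serviceName>.<namespace>.svc.cluster.local), and the StatefulSet's serviceName must point at a headless Service that actually exists:
<connectors>
   <connector name="netty-connector">tcp://$PLACEHOLDERIP:61610</connector>
   <connector name="broker0">tcp://artemis01headless-0.artemis01.artemis.svc.cluster.local:61610</connector>
   <connector name="broker1">tcp://artemis01headless-1.artemis01.artemis.svc.cluster.local:61610</connector>
</connectors>
<cluster-connections>
   <cluster-connection name="brokerCluster">
      <connector-ref>netty-connector</connector-ref>
      <retry-interval>500</retry-interval>
      <use-duplicate-detection>true</use-duplicate-detection>
      <message-load-balancing>ON_DEMAND</message-load-balancing>
      <max-hops>1</max-hops>
      <static-connectors>
         <connector-ref>broker0</connector-ref>
         <connector-ref>broker1</connector-ref>
      </static-connectors>
   </cluster-connection>
</cluster-connections>
With a static cluster there is no <broadcast-group> or <discovery-group> (and therefore no UDP multicast) to worry about.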

Related

Spark Operator unable to download dependency jar from password protected nexus repository

Spark Operator (spark-submit) is unable to download a dependency jar from a password-protected Nexus repository. Here is my SparkApplication:
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
name: spark-pi
namespace: default
spec:
type: Scala
mode: cluster
image: "gcr.io/spark-operator/spark:v3.1.1"
imagePullPolicy: Always
mainClass: org.apache.spark.examples.SparkPi
mainApplicationFile: "local:///opt/spark/examples/my-main.jar"
sparkVersion: "3.1.1"
deps:
jars:
- http://nexus-url/mydep.jar
restartPolicy:
type: Never
volumes:
- name: "test-volume"
hostPath:
path: "/tmp"
type: Directory
driver:
cores: 1
coreLimit: "1200m"
memory: "512m"
labels:
version: 3.1.1
serviceAccount: spark
volumeMounts:
- name: "test-volume"
mountPath: "/tmp"
executor:
cores: 1
instances: 1
memory: "512m"
labels:
version: 3.1.1
volumeMounts:
- name: "test-volume"
mountPath: "/tmp"
I have tried providing deps.jars with user:password like:
http://user:password@nexushost/mydep.jar
I also tried Ivy settings:
spec:
  conf:
    spark.jars.ivySettings: /tmp/ivy/ivysettings.xml
# the ivysettings.xml mounted at that path has the following content:
<ivysettings>
<settings defaultResolver="nexus" />
<credentials host="nexus.host.com" realm="Sonatype Nexus Repository Manager" username="username" passwd="mypassword" />
<property name="nexus-public" value="https://nexus.host.com/repository/public" />
<property name="nexus-releases" value="https://nexus.host.com/repository/releases" />
<property name="nexus-snapshots" value="https://nexus.host.com/repository/snapshots" />
<resolvers>
<ibiblio name="nexus" m2compatible="true" root="${nexus-public}" />
<ibiblio name="nexus-snapshots" m2compatible="true" root="${nexus-snapshots}" />
<ibiblio name="nexus-releases" m2compatible="true" root="${nexus-releases}" />
</resolvers>
</ivysettings>
Note: my Nexus repository is of type RAW and I have just uploaded the jar to the repo.
There is no network block, as I am able to download the jar when the repo is public.

Cannot connect to Artemis from another pod in kubernetes

I have two pods: one is my ActiveMQ Artemis pod and the other is a consumer service.
I have an Ingress set up so that I can access the console, which works, but the issue comes down to my consumer pod trying to access port 61616 for connections.
The error I get from my consumer pod:
Could not refresh JMS Connection for destination 'PricingSave' - retrying using FixedBackOff{interval=5000, currentAttempts=0, maxAttempts=unlimited}. Cause: Failed to create session factory; nested exception is ActiveMQConnectionTimedOutException[errorType=CONNECTION_TIMEDOUT message=AMQ219013: Timed out waiting to receive cluster topology. Group:null]
My broker config:
<?xml version='1.0'?>
<configuration xmlns="urn:activemq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xi="http://www.w3.org/2001/XInclude"
xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
<core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:activemq:core ">
<name>Artemis</name>
<broadcast-groups>
<broadcast-group name="brokerCluster-broadcast">
<local-bind-address>0.0.0.0</local-bind-address>
<local-bind-port>10000</local-bind-port>
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<broadcast-period>20</broadcast-period>
<connector-ref>netty-connector</connector-ref>
</broadcast-group>
</broadcast-groups>
<discovery-groups>
<discovery-group name="brokerCluster-discovery">
<local-bind-port>10000</local-bind-port>
<local-bind-address>0.0.0.0</local-bind-address>
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<refresh-timeout>10</refresh-timeout>
</discovery-group>
</discovery-groups>
<cluster-connections>
<cluster-connection name="brokerCluster">
<connector-ref>netty-connector</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<discovery-group-ref discovery-group-name="brokerCluster-discovery"/>
</cluster-connection>
</cluster-connections>
<connectors>
<connector name="netty-connector">tcp://PODS_IP:61616</connector>
</connectors>
<persistence-enabled>true</persistence-enabled>
<!-- this could be ASYNCIO, MAPPED, NIO
ASYNCIO: Linux Libaio
MAPPED: mmap files
NIO: Plain Java Files
-->
<journal-type>ASYNCIO</journal-type>
<paging-directory>data/paging</paging-directory>
<bindings-directory>data/bindings</bindings-directory>
<journal-directory>data/journal</journal-directory>
<large-messages-directory>data/large-messages</large-messages-directory>
<journal-datasync>true</journal-datasync>
<journal-min-files>2</journal-min-files>
<journal-pool-files>10</journal-pool-files>
<journal-device-block-size>4096</journal-device-block-size>
<journal-file-size>10M</journal-file-size>
<!--
This value was determined through a calculation.
Your system could perform 2.84 writes per millisecond
on the current journal configuration.
That translates as a sync write every 352000 nanoseconds.
Note: If you specify 0 the system will perform writes directly to the disk.
We recommend this to be 0 if you are using journalType=MAPPED and journal-datasync=false.
-->
<journal-buffer-timeout>352000</journal-buffer-timeout>
<!--
When using ASYNCIO, this will determine the writing queue depth for libaio.
-->
<journal-max-io>4096</journal-max-io>
<!--
You can verify the network health of a particular NIC by specifying the <network-check-NIC> element.
<network-check-NIC>theNicName</network-check-NIC>
-->
<!--
Use this to use an HTTP server to validate the network
<network-check-URL-list>http://www.apache.org</network-check-URL-list> -->
<!-- <network-check-period>10000</network-check-period> -->
<!-- <network-check-timeout>1000</network-check-timeout> -->
<!-- this is a comma separated list, no spaces, just DNS or IPs
it should accept IPV6
Warning: Make sure you understand your network topology as this is meant to validate if your network is valid.
Using IPs that could eventually disappear or be partially visible may defeat the purpose.
You can use a list of multiple IPs, and if any successful ping will make the server OK to continue running -->
<!-- <network-check-list>10.0.0.1</network-check-list> -->
<!-- use this to customize the ping used for ipv4 addresses -->
<!-- <network-check-ping-command>ping -c 1 -t %d %s</network-check-ping-command> -->
<!-- use this to customize the ping used for ipv6 addresses -->
<!-- <network-check-ping6-command>ping6 -c 1 %2$s</network-check-ping6-command> -->
<!-- how often we are looking for how many bytes are being used on the disk in ms -->
<disk-scan-period>5000</disk-scan-period>
<!-- once the disk hits this limit the system will block, or close the connection in certain protocols
that won't support flow control. -->
<max-disk-usage>90</max-disk-usage>
<!-- should the broker detect dead locks and other issues -->
<critical-analyzer>true</critical-analyzer>
<critical-analyzer-timeout>120000</critical-analyzer-timeout>
<critical-analyzer-check-period>60000</critical-analyzer-check-period>
<critical-analyzer-policy>HALT</critical-analyzer-policy>
<page-sync-timeout>484000</page-sync-timeout>
<!-- the system will enter into page mode once you hit this limit.
This is an estimate in bytes of how much the messages are using in memory
The system will use half of the available memory (-Xmx) by default for the global-max-size.
You may specify a different value here if you need to customize it to your needs.
<global-max-size>100Mb</global-max-size>
-->
<acceptors>
<!-- Acceptor for every supported protocol -->
<acceptor name="artemis">tcp://PODS_IP:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
<!-- AMQP Acceptor. Listens on default AMQP port for AMQP traffic.-->
<acceptor name="amqp">tcp://PODS_IP:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=102400;amqpDuplicateDetection=true</acceptor>
<!-- STOMP Acceptor. -->
<acceptor name="stomp">tcp://PODS_IP:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>
<!-- HornetQ Compatibility Acceptor. Enables HornetQ Core and STOMP for legacy HornetQ clients. -->
<acceptor name="hornetq">tcp://PODS_IP:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>
<!-- MQTT Acceptor -->
<acceptor name="mqtt">tcp://PODS_IP:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>
</acceptors>
<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="amq"/>
<permission type="deleteNonDurableQueue" roles="amq"/>
<permission type="createDurableQueue" roles="amq"/>
<permission type="deleteDurableQueue" roles="amq"/>
<permission type="createAddress" roles="amq"/>
<permission type="deleteAddress" roles="amq"/>
<permission type="consume" roles="amq"/>
<permission type="browse" roles="amq"/>
<permission type="send" roles="amq"/>
<!-- we need this otherwise ./artemis data imp wouldn't work -->
<permission type="manage" roles="amq"/>
</security-setting>
</security-settings>
<address-settings>
<!-- if you define auto-create on certain queues, management has to be auto-create -->
<address-setting match="activemq.management#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
<!--default for catch all-->
<address-setting match="#">
<dead-letter-address>dlq</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
</address-settings>
<addresses>
<address name="PricingSave.dlq">
<anycast>
<queue name="PricingSave.dlq"/>
</anycast>
</address>
<address name="PricingSave">
<anycast>
<queue name="PricingSave"/>
</anycast>
</address>
<address name="ExpiryQueue">
<anycast>
<queue name="ExpiryQueue" />
</anycast>
</address>
</addresses>
<!-- Uncomment the following if you want to use the Standard LoggingActiveMQServerPlugin pluging to log in events
<broker-plugins>
<broker-plugin class-name="org.apache.activemq.artemis.core.server.plugin.impl.LoggingActiveMQServerPlugin">
<property key="LOG_ALL_EVENTS" value="true"/>
<property key="LOG_CONNECTION_EVENTS" value="true"/>
<property key="LOG_SESSION_EVENTS" value="true"/>
<property key="LOG_CONSUMER_EVENTS" value="true"/>
<property key="LOG_DELIVERING_EVENTS" value="true"/>
<property key="LOG_SENDING_EVENTS" value="true"/>
<property key="LOG_INTERNAL_EVENTS" value="true"/>
</broker-plugin>
</broker-plugins>
-->
</core>
</configuration>
My Artemis Service config:
kind: Service
apiVersion: v1
metadata:
  labels:
    name: amq
  name: amq
  namespace: default
spec:
  selector:
    app: amq
  ports:
    - name: web
      port: 8161
      protocol: TCP
      targetPort: 8161
    - name: stomp
      port: 61613
      protocol: TCP
      targetPort: 61613
    - name: mqtt
      port: 1883
      protocol: TCP
      targetPort: 1883
    - name: openwire
      port: 61616
      protocol: TCP
      targetPort: 61616
    - name: jmx
      port: 9404
      protocol: TCP
      targetPort: 9404
    - name: hornetq
      port: 5445
      protocol: TCP
      targetPort: 5445
    - name: amqp
      port: 5672
      protocol: TCP
      targetPort: 5672
  sessionAffinity: None
  type: ClusterIP
My Artemis Deployment config:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: amq
  name: amq
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: amq
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled
      labels:
        app: amq
    spec:
      containers:
        - image: REPOSITORY/IMAGE
          name: amq
          env:
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: CONTAINER_CORE_LIMIT
              valueFrom:
                resourceFieldRef:
                  resource: limits.cpu
            - name: CONTAINER_MAX_MEMORY
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
          ports:
            - containerPort: 8161
              name: web
              protocol: TCP
            - containerPort: 61613
              name: stomp
              protocol: TCP
            - containerPort: 1883
              name: mqtt
              protocol: TCP
            - containerPort: 61616
              name: openwire
              protocol: TCP
            - containerPort: 9404
              name: jmx
              protocol: TCP
            - containerPort: 5445
              name: hornetq
              protocol: TCP
            - containerPort: 5672
              name: amqp
              protocol: TCP
          imagePullPolicy: Always
          resources:
            requests:
              cpu: 200m
              memory: 512Mi
            limits:
              cpu: 400m
              memory: 1012Mi
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
                - NET_RAW
      restartPolicy: Always
      imagePullSecrets:
        - name: my-registry...
Using 0.0.0.0 for a connector is not valid. In fact, there should be a WARN level message in the log about it. Your cluster will not form properly with such a configuration. You need to use a hostname or IP address that other hosts can use to reach the broker that broadcasts it. That's the whole point of the broadcast-group - to tell other brokers who may be listening how they can connect to it.
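If you need the broker to advertise its own pod IP (i.e. to fill in the PODS_IP placeholder shown in your config), one common approach is the Downward API plus a substitution in the container's entrypoint. A rough sketch; the placeholder name, file path, and sed step are assumptions about how your image is built:
env:
  - name: PODS_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
and then, before starting the broker, something like:
sed -i "s/PODS_IP/${PODS_IP}/g" /var/lib/artemis-instance/etc/broker.xml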
After some more testing locally on minikube, which worked, I rebuilt my cluster and redeployed.
I decided not to deploy or reconfigure linkerd on this new cluster, and everything is now working fine.
So the issue I had must have been down to some proxy settings related to linkerd. I'll follow up on this post when I find a solution to get connections through linkerd.

ActiveMQ running in Kubernetes minikube: how to configure admin password

I am setting up a minikube cluster which contains an ActiveMQ message queue together with InfluxDB and Grafana.
For Grafana, I was able to set the admin password via the deployment:
containers:
  - env:
      - name: GF_INSTALL_PLUGINS
        value: grafana-piechart-panel, blackmirror1-singlestat-math-panel
      - name: GF_SECURITY_ADMIN_USER
        value: <grafanaadminusername>
      - name: GF_SECURITY_ADMIN_PASSWORD
        value: <grafanaadminpassword>
    image: grafana/grafana:6.6.0
    name: grafana
    volumeMounts:
      - mountPath: /etc/grafana/provisioning
        name: grafana-volume
        subPath: provisioning/
      - mountPath: /var/lib/grafana/dashboards
        name: grafana-volume
        subPath: dashboards/
      - mountPath: /etc/grafana/grafana.ini
        name: grafana-volume
        subPath: grafana.ini
        readOnly: true
restartPolicy: Always
volumes:
  - name: grafana-volume
    hostPath:
      path: /grafana
For InfluxDB I set the user/password via a Secret:
apiVersion: v1
kind: Secret
metadata:
  name: influxdb
  namespace: default
type: Opaque
stringData:
  INFLUXDB_CONFIG_PATH: /etc/influxdb/influxdb.conf
  INFLUXDB_ADMIN_USER: <influxdbadminuser>
  INFLUXDB_ADMIN_PASSWORD: <influxdbbadminpassword>
  INFLUXDB_DB: <mydb>
Currently, my ActiveMQ deployment looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: activemq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: activemq
  template:
    metadata:
      labels:
        app: activemq
    spec:
      containers:
        - name: web
          image: rmohr/activemq:5.15.9
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 61616
            - containerPort: 8161
          resources:
            limits:
              memory: 512Mi
How do I achieve a similar result (admin user and password via config file) for ActiveMQ? Even better if this can be achieved via an encrypted Secret, which I haven't managed yet for InfluxDB and Grafana.
I would do this the following way.
Encrypted passwords in ActiveMQ are nicely described here.
First you need to prepare such an encrypted password. ActiveMQ has a built-in utility for that:
As of ActiveMQ 5.4.1 you can encrypt your passwords and safely store
them in configuration files. To encrypt the password, you can use the
newly added encrypt command like:
$ bin/activemq encrypt --password activemq --input mypassword
...
Encrypted text: eeWjNyX6FY8Fjp3E+F6qTytV11bZItDp
Where the password you want to encrypt is passed with the input argument, while the password argument is a secret used by the encryptor. In a similar fashion you can test-out your passwords like:
$ bin/activemq decrypt --password activemq --input eeWjNyX6FY8Fjp3E+F6qTytV11bZItDp
...
Decrypted text: mypassword
Note: It is recommended that you use only alphanumeric characters for
the password. Special characters, such as $/^&, are not supported.
The next step is to add the password to the appropriate configuration
file, $ACTIVEMQ_HOME/conf/credentials-enc.properties by default.
activemq.username=system
activemq.password=ENC(mYRkg+4Q4hua1kvpCCI2hg==)
guest.password=ENC(Cf3Jf3tM+UrSOoaKU50od5CuBa8rxjoL)
...
jdbc.password=ENC(eeWjNyX6FY8Fjp3E+F6qTytV11bZItDp)
You probably don't even have to rebuild your image so that it contains the appropriate configuration file with the encrypted password. You can add it as ConfigMap data to a volume. You can read how to do that here, so I'll avoid more copy-pasting from the documentation. Alternatively you may want to use a secret volume. It's not the most important point here, as it is just a way of substituting the original ActiveMQ configuration file in your Pod with your custom configuration file, and you probably already know how to do that.
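For instance, a minimal sketch (the ConfigMap name and mount path are assumptions; adjust them to your image's configuration directory, and the ENC value is just the one from the example above):
apiVersion: v1
kind: ConfigMap
metadata:
  name: activemq-credentials
data:
  credentials-enc.properties: |
    activemq.username=system
    activemq.password=ENC(mYRkg+4Q4hua1kvpCCI2hg==)
mounted into the container with:
volumeMounts:
  - name: activemq-config
    mountPath: /opt/activemq/conf/credentials-enc.properties
    subPath: credentials-enc.properties
volumes:
  - name: activemq-config
    configMap:
      name: activemq-credentials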
There is one more step to configure on the ActiveMQ side. This config file can also be passed via a ConfigMap, as in the previous example.
Finally, you need to instruct your property loader to encrypt
variables when it loads properties to the memory. Instead of standard
property loader we’ll use the special one (see
$ACTIVEMQ_HOME/conf/activemq-security.xml) to achieve this.
<bean id="environmentVariablesConfiguration" class="org.jasypt.encryption.pbe.config.EnvironmentStringPBEConfig">
<property name="algorithm" value="PBEWithMD5AndDES" />
<property name="passwordEnvName" value="ACTIVEMQ\_ENCRYPTION\_PASSWORD" />
</bean>
<bean id="configurationEncryptor" class="org.jasypt.encryption.pbe.StandardPBEStringEncryptor">
<property name="config" ref="environmentVariablesConfiguration" />
</bean>
<bean id="propertyConfigurer" class="org.jasypt.spring31.properties.EncryptablePropertyPlaceholderConfigurer">
<constructor-arg ref="configurationEncryptor" />
<property name="location" value="file:${activemq.base}/conf/credentials-enc.properties"/>
</bean>
This way we instructed our ActiveMQ to load the encryptor password from the ACTIVEMQ_ENCRYPTION_PASSWORD environment variable and then use it to decrypt the passwords from the credentials-enc.properties file.
Now let's take care of the content of the ACTIVEMQ_ENCRYPTION_PASSWORD env var.
We can set such an environment variable in our Pod via a Secret. First we need to create one, then use it as an environment variable, for example:
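A minimal sketch (the Secret name is an assumption, and the password value is just the one used in the encrypt example above):
apiVersion: v1
kind: Secret
metadata:
  name: activemq-encryption
type: Opaque
stringData:
  ACTIVEMQ_ENCRYPTION_PASSWORD: activemq
and in the ActiveMQ container spec:
env:
  - name: ACTIVEMQ_ENCRYPTION_PASSWORD
    valueFrom:
      secretKeyRef:
        name: activemq-encryption
        key: ACTIVEMQ_ENCRYPTION_PASSWORD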
I hope it helps.
It seems like this ActiveMQ Dockerfile does not provide much in this regard, but it notes that you can specify the location of configuration files on the host system. You would have to prepare these files:
By default data and configuration is stored inside the container and will be lost after the container has been shut down and removed. To persist these files you can mount these directories to directories on your host system:
docker run -p 61616:61616 -p 8161:8161 \
-v /your/persistent/dir/conf:/opt/activemq/conf \
-v /your/persistent/dir/data:/opt/activemq/data \
rmohr/activemq
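In a minikube Deployment, a rough equivalent of those -v mounts would be the following pod-spec fragment (a sketch; the volume names and host paths are assumptions):
spec:
  containers:
    - name: web
      image: rmohr/activemq:5.15.9
      volumeMounts:
        - name: activemq-conf
          mountPath: /opt/activemq/conf
        - name: activemq-data
          mountPath: /opt/activemq/data
  volumes:
    - name: activemq-conf
      hostPath:
        path: /your/persistent/dir/conf
    - name: activemq-data
      hostPath:
        path: /your/persistent/dir/data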
But maybe you can use a different ActiveMQ container image? This one seems to provide the credentials configuration via environment variables, just like you are using for the other containers: https://hub.docker.com/r/webcenter/activemq

How do you enable Feature Gates in K8s?

I need to enable a few Feature Gates on my bare-metal K8s cluster (v1.13). I've tried using the kubelet flag --config to enable them, as kubelet --feature-gates <feature gate> throws an error saying that the feature has been deprecated.
I've created a .yml file with the following configuration:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
feature-gates:
  VolumeSnapshotDataSource=true
and after running "kubelet --config <file>", I got the following error:
I0119 21:59:52.987945 29087 server.go:417] Version: v1.14.2
I0119 21:59:52.988165 29087 plugins.go:103] No cloud provider specified.
W0119 21:59:52.988188 29087 server.go:556] standalone mode, no API client
F0119 21:59:52.988203 29087 server.go:265] failed to run Kubelet: no client provided, cannot use webhook authentication
Does anyone know what could be happening and how to fix this problem?
You don't apply --feature-gates to the kubelet; you apply it to the API server. Depending on how you have installed Kubernetes on bare metal, you would need to either stop the API server, edit the command you start it with, and add the following parameter:
--feature-gates=VolumeSnapshotDataSource=true
Or, if it is in a pod, find the manifest, edit it and re-deploy it (it should happen automatically, once you finish editing). It should look like this:
...
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
    - command:
        - kube-apiserver
        - --advertise-address=10.132.0.48
        - --allow-privileged=true
        - --authorization-mode=Node,RBAC
        - --client-ca-file=/etc/kubernetes/pki/ca.crt
        - --enable-admission-plugins=NodeRestriction
        - --enable-bootstrap-token-auth=true
        - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
        - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
        - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
        - --etcd-servers=https://127.0.0.1:2379
        - --insecure-port=0
        - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
        - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
        - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
        - --requestheader-allowed-names=front-proxy-client
        - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
        - --requestheader-extra-headers-prefix=X-Remote-Extra-
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-username-headers=X-Remote-User
        - --secure-port=6443
        - --service-account-key-file=/etc/kubernetes/pki/sa.pub
        - --service-cluster-ip-range=10.96.0.0/12
        - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
        - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
        - --feature-gates=VolumeSnapshotDataSource=true
      image: k8s.gcr.io/kube-apiserver:v1.16.4
      imagePullPolicy: IfNotPresent
      livenessProbe:
        failureThreshold: 8
        httpGet:
          host: 10.132.0.48
          path: /healthz
          port: 6443
          scheme: HTTPS
...
It (VolumeSnapshotDataSource) is enabled by default as a beta feature in 1.17. It needs to be enabled in the API server if the Kubernetes version is less than 1.17.
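If the cluster was bootstrapped with kubeadm, the same flag can also be carried in the ClusterConfiguration instead of editing the static pod manifest by hand. A sketch (the apiVersion may differ depending on your kubeadm release):
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    feature-gates: "VolumeSnapshotDataSource=true"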

http request always blocked in container of k8s cluster pod

Stages:
connect to a container's shell
curl www.xxx.com (this always hangs)
...
Then I used tcpdump on the host machine and filtered by IP:
tcpdump -i eth0 host ip
3 11:05:05 2019/12/2 133.5701630 10.171.162.231 111.111.222.333 TCP TCP: [Bad CheckSum]Flags=......S., SrcPort=48836, DstPort=HTTP(80), PayloadLen=0, Seq=126843476, Ack=0, Win=29200 ( Negotiating scale factor 0x7 ) = 29200
4 11:05:05 2019/12/2 133.5704230 111.111.222.333 10.171.162.231 TCP TCP:Flags=...A..S., SrcPort=HTTP(80), DstPort=48836, PayloadLen=0, Seq=3228156738, Ack=126843477, Win=2896 ( Negotiated scale factor 0x9 ) = 1482752
5 11:05:05 2019/12/2 133.5704630 10.171.162.231 111.111.222.333 TCP TCP: [Bad CheckSum]Flags=...A...., SrcPort=48836, DstPort=HTTP(80), PayloadLen=0, Seq=126843477, Ack=3228156739, Win=229 (scale factor 0x7) = 29312
6 11:05:05 2019/12/2 133.5705430 10.171.162.231 111.111.222.333 HTTP HTTP:Request, GET /api/test, Query:debug
7 11:05:05 2019/12/2 133.5707110 111.111.222.333 10.171.162.231 TCP TCP:Flags=...A...., SrcPort=HTTP(80), DstPort=48836, PayloadLen=0, Seq=3228156739, Ack=126843596, Win=6 (scale factor 0x9) = 3072
The TCP flags are:
src -> dst syn
dst -> src syn/ack
src -> dst ack
src -> dst ack/push
dst -> src ack
The curl command waits a long time and then throws a timeout error. In a normal request there would be a dst -> src ack/push packet, but I never receive one.
I don't know why this happens or how to resolve it.
My deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app-dep
  labels:
    app: test-app
    version: stable
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test-app
      version: stable
  template:
    metadata:
      labels:
        app: test-app
        version: stable
    spec:
      containers:
        - image: test-app
          name: test-app
          livenessProbe:
            httpGet:
              path: /health/status
              port: 80
            initialDelaySeconds: 3
            periodSeconds: 10
          ports:
            - containerPort: 80