Cannot connect to Artemis from another pod in Kubernetes

I have two pods: one is my ArtemisMQ pod and the other is a consumer service.
I have ingress set up so that I can access the console, which works, but the issue comes down to when my consumer pod tries to access port 61616 for connections.
The error I get from my consumer pod is:
Could not refresh JMS Connection for destination 'PricingSave' - retrying using FixedBackOff{interval=5000, currentAttempts=0, maxAttempts=unlimited}. Cause: Failed to create session factory; nested exception is ActiveMQConnectionTimedOutException[errorType=CONNECTION_TIMEDOUT message=AMQ219013: Timed out waiting to receive cluster topology. Group:null]
My broker config:
<?xml version='1.0'?>
<configuration xmlns="urn:activemq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xi="http://www.w3.org/2001/XInclude"
xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
<core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:activemq:core ">
<name>Artemis</name>
<broadcast-groups>
<broadcast-group name="brokerCluster-broadcast">
<local-bind-address>0.0.0.0</local-bind-address>
<local-bind-port>10000</local-bind-port>
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<broadcast-period>20</broadcast-period>
<connector-ref>netty-connector</connector-ref>
</broadcast-group>
</broadcast-groups>
<discovery-groups>
<discovery-group name="brokerCluster-discovery">
<local-bind-port>10000</local-bind-port>
<local-bind-address>0.0.0.0</local-bind-address>
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<refresh-timeout>10</refresh-timeout>
</discovery-group>
</discovery-groups>
<cluster-connections>
<cluster-connection name="brokerCluster">
<connector-ref>netty-connector</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<discovery-group-ref discovery-group-name="brokerCluster-discovery"/>
</cluster-connection>
</cluster-connections>
<connectors>
<connector name="netty-connector">tcp://PODS_IP:61616</connector>
</connectors>
<persistence-enabled>true</persistence-enabled>
<!-- this could be ASYNCIO, MAPPED, NIO
ASYNCIO: Linux Libaio
MAPPED: mmap files
NIO: Plain Java Files
-->
<journal-type>ASYNCIO</journal-type>
<paging-directory>data/paging</paging-directory>
<bindings-directory>data/bindings</bindings-directory>
<journal-directory>data/journal</journal-directory>
<large-messages-directory>data/large-messages</large-messages-directory>
<journal-datasync>true</journal-datasync>
<journal-min-files>2</journal-min-files>
<journal-pool-files>10</journal-pool-files>
<journal-device-block-size>4096</journal-device-block-size>
<journal-file-size>10M</journal-file-size>
<!--
This value was determined through a calculation.
Your system could perform 2.84 writes per millisecond
on the current journal configuration.
That translates as a sync write every 352000 nanoseconds.
Note: If you specify 0 the system will perform writes directly to the disk.
We recommend this to be 0 if you are using journalType=MAPPED and journal-datasync=false.
-->
<journal-buffer-timeout>352000</journal-buffer-timeout>
<!--
When using ASYNCIO, this will determine the writing queue depth for libaio.
-->
<journal-max-io>4096</journal-max-io>
<!--
You can verify the network health of a particular NIC by specifying the <network-check-NIC> element.
<network-check-NIC>theNicName</network-check-NIC>
-->
<!--
Use this to use an HTTP server to validate the network
<network-check-URL-list>http://www.apache.org</network-check-URL-list> -->
<!-- <network-check-period>10000</network-check-period> -->
<!-- <network-check-timeout>1000</network-check-timeout> -->
<!-- this is a comma separated list, no spaces, just DNS or IPs
it should accept IPV6
Warning: Make sure you understand your network topology as this is meant to validate if your network is valid.
Using IPs that could eventually disappear or be partially visible may defeat the purpose.
You can use a list of multiple IPs, and if any successful ping will make the server OK to continue running -->
<!-- <network-check-list>10.0.0.1</network-check-list> -->
<!-- use this to customize the ping used for ipv4 addresses -->
<!-- <network-check-ping-command>ping -c 1 -t %d %s</network-check-ping-command> -->
<!-- use this to customize the ping used for ipv6 addresses -->
<!-- <network-check-ping6-command>ping6 -c 1 %2$s</network-check-ping6-command> -->
<!-- how often we are looking for how many bytes are being used on the disk in ms -->
<disk-scan-period>5000</disk-scan-period>
<!-- once the disk hits this limit the system will block, or close the connection in certain protocols
that won't support flow control. -->
<max-disk-usage>90</max-disk-usage>
<!-- should the broker detect dead locks and other issues -->
<critical-analyzer>true</critical-analyzer>
<critical-analyzer-timeout>120000</critical-analyzer-timeout>
<critical-analyzer-check-period>60000</critical-analyzer-check-period>
<critical-analyzer-policy>HALT</critical-analyzer-policy>
<page-sync-timeout>484000</page-sync-timeout>
<!-- the system will enter into page mode once you hit this limit.
This is an estimate in bytes of how much the messages are using in memory
The system will use half of the available memory (-Xmx) by default for the global-max-size.
You may specify a different value here if you need to customize it to your needs.
<global-max-size>100Mb</global-max-size>
-->
<acceptors>
<!-- Acceptor for every supported protocol -->
<acceptor name="artemis">tcp://PODS_IP:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
<!-- AMQP Acceptor. Listens on default AMQP port for AMQP traffic.-->
<acceptor name="amqp">tcp://PODS_IP:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=102400;amqpDuplicateDetection=true</acceptor>
<!-- STOMP Acceptor. -->
<acceptor name="stomp">tcp://PODS_IP:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>
<!-- HornetQ Compatibility Acceptor. Enables HornetQ Core and STOMP for legacy HornetQ clients. -->
<acceptor name="hornetq">tcp://PODS_IP:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>
<!-- MQTT Acceptor -->
<acceptor name="mqtt">tcp://PODS_IP:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>
</acceptors>
<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="amq"/>
<permission type="deleteNonDurableQueue" roles="amq"/>
<permission type="createDurableQueue" roles="amq"/>
<permission type="deleteDurableQueue" roles="amq"/>
<permission type="createAddress" roles="amq"/>
<permission type="deleteAddress" roles="amq"/>
<permission type="consume" roles="amq"/>
<permission type="browse" roles="amq"/>
<permission type="send" roles="amq"/>
<!-- we need this otherwise ./artemis data imp wouldn't work -->
<permission type="manage" roles="amq"/>
</security-setting>
</security-settings>
<address-settings>
<!-- if you define auto-create on certain queues, management has to be auto-create -->
<address-setting match="activemq.management#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
<!--default for catch all-->
<address-setting match="#">
<dead-letter-address>dlq</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
</address-settings>
<addresses>
<address name="PricingSave.dlq">
<anycast>
<queue name="PricingSave.dlq"/>
</anycast>
</address>
<address name="PricingSave">
<anycast>
<queue name="PricingSave"/>
</anycast>
</address>
<address name="ExpiryQueue">
<anycast>
<queue name="ExpiryQueue" />
</anycast>
</address>
</addresses>
<!-- Uncomment the following if you want to use the Standard LoggingActiveMQServerPlugin pluging to log in events
<broker-plugins>
<broker-plugin class-name="org.apache.activemq.artemis.core.server.plugin.impl.LoggingActiveMQServerPlugin">
<property key="LOG_ALL_EVENTS" value="true"/>
<property key="LOG_CONNECTION_EVENTS" value="true"/>
<property key="LOG_SESSION_EVENTS" value="true"/>
<property key="LOG_CONSUMER_EVENTS" value="true"/>
<property key="LOG_DELIVERING_EVENTS" value="true"/>
<property key="LOG_SENDING_EVENTS" value="true"/>
<property key="LOG_INTERNAL_EVENTS" value="true"/>
</broker-plugin>
</broker-plugins>
-->
</core>
</configuration>
My Artemis Service config:
kind: Service
apiVersion: v1
metadata:
  labels:
    name: amq
  name: amq
  namespace: default
spec:
  selector:
    app: amq
  ports:
    - name: web
      port: 8161
      protocol: TCP
      targetPort: 8161
    - name: stomp
      port: 61613
      protocol: TCP
      targetPort: 61613
    - name: mqtt
      port: 1883
      protocol: TCP
      targetPort: 1883
    - name: openwire
      port: 61616
      protocol: TCP
      targetPort: 61616
    - name: jmx
      port: 9404
      protocol: TCP
      targetPort: 9404
    - name: hornetq
      port: 5445
      protocol: TCP
      targetPort: 5445
    - name: amqp
      port: 5672
      protocol: TCP
      targetPort: 5672
  sessionAffinity: None
  type: ClusterIP
My Artemis deployment Config:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: amq
  name: amq
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: amq
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled
      labels:
        app: amq
    spec:
      containers:
        - image: REPOSITORY/IMAGE
          name: amq
          env:
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: CONTAINER_CORE_LIMIT
              valueFrom:
                resourceFieldRef:
                  resource: limits.cpu
            - name: CONTAINER_MAX_MEMORY
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
          ports:
            - containerPort: 8161
              name: web
              protocol: TCP
            - containerPort: 61613
              name: stomp
              protocol: TCP
            - containerPort: 1883
              name: mqtt
              protocol: TCP
            - containerPort: 61616
              name: openwire
              protocol: TCP
            - containerPort: 9404
              name: jmx
              protocol: TCP
            - containerPort: 5445
              name: hornetq
              protocol: TCP
            - containerPort: 5672
              name: amqp
              protocol: TCP
          imagePullPolicy: Always
          resources:
            requests:
              cpu: 200m
              memory: 512Mi
            limits:
              cpu: 400m
              memory: 1012Mi
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
                - NET_RAW
      restartPolicy: Always
      imagePullSecrets:
        - name: my-registry...

Using 0.0.0.0 for a connector is not valid. In fact, there should be a WARN level message in the log about it. Your cluster will not form properly with such a configuration. You need to use a hostname or IP address that other hosts can use to reach the broker that broadcasts it. That's the whole point of the broadcast-group - to tell other brokers who may be listening how they can connect to it.
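One way to give the connector a routable address in Kubernetes (not something stated in the answer above, just a sketch) is to expose the pod IP through the downward API and substitute it into broker.xml when the container starts; the variable name and the substitution step are assumptions about how the image is built:

# Hypothetical fragment for the amq container in the Deployment above.
env:
  - name: MY_POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
# An entrypoint script could then replace the PODS_IP placeholder before starting the broker, e.g.:
#   sed -i "s|PODS_IP|$MY_POD_IP|g" /var/lib/artemis-instance/etc/broker.xml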

After some more testing locally on minikube, which worked, I rebuilt my cluster and redeployed.
I decided not to deploy or reconfigure Linkerd on this new cluster, and everything is now working fine.
So the issue I had must have been down to some proxy settings related to Linkerd. I'll follow this post up when I find a solution to get connections through Linkerd.
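If the culprit is the Linkerd sidecar intercepting the Core protocol traffic, one workaround that is sometimes used (untested here, so treat it as an assumption for this setup) is to exclude port 61616 from the proxy via pod annotations on both the broker and the consumer pods:

# Hypothetical pod-template annotations to keep port 61616 out of the Linkerd proxy.
template:
  metadata:
    annotations:
      linkerd.io/inject: enabled
      config.linkerd.io/skip-inbound-ports: "61616"    # broker side
      config.linkerd.io/skip-outbound-ports: "61616"   # consumer side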

Related

Spark Operator unable to download dependency jar from password-protected Nexus repository

Spark Operator/spark-submit unable to download dependency jar from password-protected Nexus repository
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
name: spark-pi
namespace: default
spec:
type: Scala
mode: cluster
image: "gcr.io/spark-operator/spark:v3.1.1"
imagePullPolicy: Always
mainClass: org.apache.spark.examples.SparkPi
mainApplicationFile: "local:///opt/spark/examples/my-main.jar"
sparkVersion: "3.1.1"
deps:
jars:
- http://nexus-url/mydep.jar
restartPolicy:
type: Never
volumes:
- name: "test-volume"
hostPath:
path: "/tmp"
type: Directory
driver:
cores: 1
coreLimit: "1200m"
memory: "512m"
labels:
version: 3.1.1
serviceAccount: spark
volumeMounts:
- name: "test-volume"
mountPath: "/tmp"
executor:
cores: 1
instances: 1
memory: "512m"
labels:
version: 3.1.1
volumeMounts:
- name: "test-volume"
mountPath: "/tmp"
I have tried providing deps.jars with user:password in the URL, like:
http://user:password@nexushost/mydep.jar
I also tried Ivy settings:
spec:
  conf:
    spark.jars.ivySettings: /tmp/ivy/ivysettings.xml
# ivysettings.xml is mounted at that path with the following content
<ivysettings>
<settings defaultResolver="nexus" />
<credentials host="nexus.host.com" realm="Sonatype Nexus Repository Manager" username="username" passwd="mypassword" />
<property name="nexus-public" value="https://nexus.host.com/repository/public" />
<property name="nexus-releases" value="https://nexus.host.com/repository/releases" />
<property name="nexus-snapshots" value="https://nexus.host.com/repository/snapshots" />
<resolvers>
<ibiblio name="nexus" m2compatible="true" root="${nexus-public}" />
<ibiblio name="nexus-snapshots" m2compatible="true" root="${nexus-snapshots}" />
<ibiblio name="nexus-releases" m2compatible="true" root="${nexus-releases}" />
</resolvers>
</ivysettings>
Note: My Nexus repository is of type RAW and I have just uploaded the jar to the repo.
There is no network block, as I am able to download when the repo is public.

How to use Apache ActiveMQ Artemis in Kubernetes

I have an issue where I have a workload in Kubernetes which contains an Apache ActiveMQ Artemis broker. The server starts properly when I have a single pod in the workload; the issue starts when I try to scale it. The brokers in the pods can't connect to each other, so I can't scale my workload. My final goal is to make it scalable. I tried it locally with two Docker containers and it worked fine.
Here is my broker.xml:
<configuration xmlns="urn:activemq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xi="http://www.w3.org/2001/XInclude"
xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
<core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:activemq:core ">
<name>Broker1</name>
<broadcast-groups>
<broadcast-group name="brokerCluster-broadcast">
<local-bind-address>0.0.0.0</local-bind-address>
<local-bind-port>10000</local-bind-port>
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<broadcast-period>20</broadcast-period>
<connector-ref>netty-connector</connector-ref>
</broadcast-group>
</broadcast-groups>
<discovery-groups>
<discovery-group name="brokerCluster-discovery">
<local-bind-port>10000</local-bind-port>
<local-bind-address>0.0.0.0</local-bind-address>
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<refresh-timeout>10</refresh-timeout>
</discovery-group>
</discovery-groups>
<cluster-connections>
<cluster-connection name="brokerCluster">
<connector-ref>netty-connector</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<discovery-group-ref discovery-group-name="brokerCluster-discovery"/>
</cluster-connection>
</cluster-connections>
<connectors>
<connector name="netty-connector">tcp://0.0.0.0:61610</connector>
</connectors>
<persistence-enabled>true</persistence-enabled>
<journal-type>NIO</journal-type>
<paging-directory>data/paging</paging-directory>
<bindings-directory>data/bindings</bindings-directory>
<journal-directory>data/journal</journal-directory>
<large-messages-directory>data/large-messages</large-messages-directory>
<journal-datasync>true</journal-datasync>
<journal-min-files>2</journal-min-files>
<journal-pool-files>10</journal-pool-files>
<journal-device-block-size>4096</journal-device-block-size>
<journal-file-size>10M</journal-file-size>
<journal-buffer-timeout>536000</journal-buffer-timeout>
<disk-scan-period>5000</disk-scan-period>
<max-disk-usage>90</max-disk-usage>
<critical-analyzer>true</critical-analyzer>
<critical-analyzer-timeout>120000</critical-analyzer-timeout>
<critical-analyzer-check-period>60000</critical-analyzer-check-period>
<critical-analyzer-policy>HALT</critical-analyzer-policy>
<page-sync-timeout>536000</page-sync-timeout>
<acceptors>
<acceptor name="netty-acceptor">tcp://0.0.0.0:61610</acceptor>
<acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
<acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
<acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>
<acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>
<acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>
</acceptors>
<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="amq"/>
<permission type="deleteNonDurableQueue" roles="amq"/>
<permission type="createDurableQueue" roles="amq"/>
<permission type="deleteDurableQueue" roles="amq"/>
<permission type="createAddress" roles="amq"/>
<permission type="deleteAddress" roles="amq"/>
<permission type="consume" roles="amq"/>
<permission type="browse" roles="amq"/>
<permission type="send" roles="amq"/>
<permission type="manage" roles="amq"/>
</security-setting>
</security-settings>
<address-settings>
<address-setting match="activemq.management#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
<address-setting match="#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<redistribution-delay>0</redistribution-delay>
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
</address-settings>
<addresses>
<address name="DLQ">
<anycast>
<queue name="DLQ" />
</anycast>
</address>
<address name="ExpiryQueue">
<anycast>
<queue name="ExpiryQueue" />
</anycast>
</address>
<address name="TestQueue">
<anycast>
<queue name="testQueue" />
</anycast>
</address>
</addresses>
</core>
</configuration>
Edit: attached the Kubernetes and Docker configs.
deployment.yml
apiVersion: v1
kind: Service
metadata:
  name: artemis
  labels:
    app: artemis
spec:
  ports:
    - port: 6161
      name: service
      protocol: UDP
    - port: 8161
      name: console
      protocol: UDP
    - port: 9876
      name: broadcast
      protocol: UDP
    - port: 61610
      name: netty-connector
      protocol: TCP
    - port: 5672
      name: acceptor-amqp
      protocol: TCP
    - port: 61613
      name: acceptor-stomp
      protocol: TCP
    - port: 5445
      name: accep-hornetq
      protocol: TCP
    - port: 1883
      name: acceptor-mqt
      protocol: TCP
    - port: 10000
      protocol: UDP
      name: brokercluster-broadcast   # this name is invalid but i wanted to match it to my broker.xml
  clusterIP: None
  selector:
    app: artemis01
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: artemis01headless
  namespace: artemis
spec:
  selector:
    matchLabels:
      app: artemis01
  serviceName: artemis01
  replicas: 3
  template:
    metadata:
      labels:
        app: artemis01
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - worker
      containers:
        - env:
            - name: ARTEMIS_PASSWORD
              value: admin
            - name: ARTEMIS_USER
              value: admin
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.podIP
          name: artemis
          image:
          ports:
            - containerPort: 6161
              name: service
              protocol: UDP
            - containerPort: 8161
              name: console
              protocol: UDP
            - containerPort: 9876
              name: broadcast
              protocol: UDP
            - containerPort: 61610
              name: netty-connector
              protocol: TCP
            - containerPort: 5672
              name: acceptor-amqp
              protocol: TCP
            - containerPort: 61613
              name: acceptor-stomp
              protocol: TCP
            - containerPort: 5445
              name: accep-hornetq
              protocol: TCP
            - containerPort: 1883
              name: acceptor-mqtt
              protocol: TCP
            - containerPort: 10000
              name: brokercluster-broadcast
              protocol: UDP
      imagePullSecrets:
        - name: xxxxxxx
Dockerfile source
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# ActiveMQ Artemis
FROM jboss/base-jdk:8
LABEL maintainer="Apache ActiveMQ Team"
# Make sure pipes are considered to determine success, see: https://github.com/hadolint/hadolint/wiki/DL4006
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
WORKDIR /opt
ENV ARTEMIS_USER artemis
ENV ARTEMIS_PASSWORD artemis
ENV ANONYMOUS_LOGIN false
ENV CREATE_ARGUMENTS --user ${ARTEMIS_USER} --password ${ARTEMIS_PASSWORD} --silent --http-host 0.0.0.0 --relax-jolokia
USER root
# add user and group for artemis
RUN groupadd -g 1001 -r artemis && useradd -r -u 1001 -g artemis artemis \
&& yum install -y libaio && yum -y clean all
USER artemis
ADD . /opt/activemq-artemis
# Web Server
EXPOSE 8161 \
61610 \
9876 \
61613 \
61616 \
5672 \
5445 \
1883 \
10000
USER root
RUN mkdir /var/lib/artemis-instance && chown -R artemis.artemis /var/lib/artemis-instance
COPY ./docker/docker-run.sh /
USER artemis
# Expose some outstanding folders
VOLUME ["/var/lib/artemis-instance"]
WORKDIR /var/lib/artemis-instance
ENTRYPOINT ["/docker-run.sh"]
CMD ["run"]
run.sh
set -e
BROKER_HOME=/var/lib/
CONFIG_PATH=$BROKER_HOME/etc
export BROKER_HOME OVERRIDE_PATH CONFIG_PATH
echo CREATE_ARGUMENTS=${CREATE_ARGUMENTS}
if ! [ -f ./etc/broker.xml ]; then
/opt/activemq-artemis/bin/artemis create ${CREATE_ARGUMENTS} .
#the script copies my broker.xml to /var/lib/artemis-instance/etc/broker.xml here.
sed -i -e 's|$PLACEHOLDERIP|'$MY_POD_IP'|g' /var/lib/artemis-instance/etc/broker.xml
else
echo "broker already created, ignoring creation"
fi
exec ./bin/artemis "$@"
I believe the issue is with your connector configuration. This is what you're using:
<connector name="netty-connector">tcp://0.0.0.0:61610</connector>
The information from this connector gets broadcast to the other cluster members since you've specified it in the <connector-ref> of your <cluster-connection>. The other cluster members then try to use this information to connect back to the node that broadcast it. However, 0.0.0.0 won't make sense to a remote client.
The address 0.0.0.0 is a meta-address. In the context of a listener (e.g. an Artemis acceptor) it means that the listener will listen for connections on all local addresses. In the context of a connector it doesn't really have a meaning. See this article for more about 0.0.0.0.
You should be using a real IP address or hostname that a client can use to actually get a network route to the server.
Also, since you're using UDP multicast (i.e. via the <broadcast-group> and <discovery-group>) please ensure this functions as well between the containers/pods. If you can't get UDP multicast working in your environment (or simply don't want to) you could switch to a static cluster configuration. Refer to the documentation and the "clustered static discovery" example for details on how to configure this.
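For the Kubernetes side of that, a headless Service whose name matches the StatefulSet's serviceName gives each pod a stable DNS name (e.g. artemis01headless-0.artemis01.artemis.svc.cluster.local) that connectors, or a static connector list, can reference instead of 0.0.0.0. This is only a sketch, not part of the original answer, and it assumes the Service is named to match spec.serviceName:

# Hypothetical headless Service providing per-pod DNS records for the StatefulSet.
apiVersion: v1
kind: Service
metadata:
  name: artemis01          # must equal spec.serviceName of the StatefulSet
  namespace: artemis
spec:
  clusterIP: None          # headless: per-pod DNS records instead of a single virtual IP
  selector:
    app: artemis01
  ports:
    - name: netty-connector
      port: 61610
      protocol: TCP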

ActiveMQ running in Kubernetes minikube: how to configure admin password

I am setting up a Minikube cluster which contains an ActiveMQ message queue together with InfluxDB and Grafana.
For Grafana, I was able to set the admin password via the deployment:
containers:
  - env:
      - name: GF_INSTALL_PLUGINS
        value: grafana-piechart-panel, blackmirror1-singlestat-math-panel
      - name: GF_SECURITY_ADMIN_USER
        value: <grafanaadminusername>
      - name: GF_SECURITY_ADMIN_PASSWORD
        value: <grafanaadminpassword>
    image: grafana/grafana:6.6.0
    name: grafana
    volumeMounts:
      - mountPath: /etc/grafana/provisioning
        name: grafana-volume
        subPath: provisioning/
      - mountPath: /var/lib/grafana/dashboards
        name: grafana-volume
        subPath: dashboards/
      - mountPath: /etc/grafana/grafana.ini
        name: grafana-volume
        subPath: grafana.ini
        readOnly: true
restartPolicy: Always
volumes:
  - name: grafana-volume
    hostPath:
      path: /grafana
For InfluxDB I set the user/password via a Secret:
apiVersion: v1
kind: Secret
metadata:
  name: influxdb
  namespace: default
type: Opaque
stringData:
  INFLUXDB_CONFIG_PATH: /etc/influxdb/influxdb.conf
  INFLUXDB_ADMIN_USER: <influxdbadminuser>
  INFLUXDB_ADMIN_PASSWORD: <influxdbbadminpassword>
  INFLUXDB_DB: <mydb>
Currently, my ActiveMQ deployment looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: activemq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: activemq
  template:
    metadata:
      labels:
        app: activemq
    spec:
      containers:
        - name: web
          image: rmohr/activemq:5.15.9
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 61616
            - containerPort: 8161
          resources:
            limits:
              memory: 512Mi
How do I achieve a similar result (admin user and password via a config file) for ActiveMQ? Even better if this can be achieved via an encrypted Secret, which I haven't managed yet for InfluxDB and Grafana either.
I would do this the following way.
Encrypted passwords in ActiveMQ are described nicely here.
First you need to prepare such an encrypted password. ActiveMQ has a built-in utility for that:
As of ActiveMQ 5.4.1 you can encrypt your passwords and safely store
them in configuration files. To encrypt the password, you can use the
newly added encrypt command like:
$ bin/activemq encrypt --password activemq --input mypassword
...
Encrypted text: eeWjNyX6FY8Fjp3E+F6qTytV11bZItDp
Where the password you want to encrypt is passed with the input argument, while the password argument is a secret used by the encryptor. In a similar fashion you can test-out your passwords like:
$ bin/activemq decrypt --password activemq --input eeWjNyX6FY8Fjp3E+F6qTytV11bZItDp
...
Decrypted text: mypassword
Note: It is recommended that you use only alphanumeric characters for
the password. Special characters, such as $/^&, are not supported.
The next step is to add the password to the appropriate configuration
file, $ACTIVEMQ_HOME/conf/credentials-enc.properties by default.
activemq.username=system
activemq.password=ENC(mYRkg+4Q4hua1kvpCCI2hg==)
guest.password=ENC(Cf3Jf3tM+UrSOoaKU50od5CuBa8rxjoL)
...
jdbc.password=ENC(eeWjNyX6FY8Fjp3E+F6qTytV11bZItDp)
You probably don't even have to rebuild your image so that it contains the appropriate configuration file with the encrypted password. You can add it as ConfigMap data to a volume. You can read how to do that here, so I'll avoid copy-pasting more of the documentation. Alternatively you may want to use a Secret volume. It's not the most important point here, as it is just a way of substituting your original ActiveMQ configuration file in your Pod with your custom configuration file, and you probably already know how to do that.
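As a sketch of that substitution (the ConfigMap name is made up, and the mount path assumes the image keeps its configuration under /opt/activemq/conf, as mentioned further down for rmohr/activemq):

apiVersion: v1
kind: ConfigMap
metadata:
  name: activemq-credentials
data:
  credentials-enc.properties: |
    activemq.username=system
    activemq.password=ENC(mYRkg+4Q4hua1kvpCCI2hg==)
---
# Fragment to merge into the activemq Deployment's pod spec (paths are assumptions):
#   containers:
#     - name: web
#       volumeMounts:
#         - name: credentials
#           mountPath: /opt/activemq/conf/credentials-enc.properties
#           subPath: credentials-enc.properties
#   volumes:
#     - name: credentials
#       configMap:
#         name: activemq-credentials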
There is one more step to configure on the ActiveMQ side. This config file can also be passed via a ConfigMap, as in the previous example.
Finally, you need to instruct your property loader to encrypt
variables when it loads properties to the memory. Instead of standard
property loader we’ll use the special one (see
$ACTIVEMQ_HOME/conf/activemq-security.xml) to achieve this.
<bean id="environmentVariablesConfiguration" class="org.jasypt.encryption.pbe.config.EnvironmentStringPBEConfig">
<property name="algorithm" value="PBEWithMD5AndDES" />
<property name="passwordEnvName" value="ACTIVEMQ\_ENCRYPTION\_PASSWORD" />
</bean>
<bean id="configurationEncryptor" class="org.jasypt.encryption.pbe.StandardPBEStringEncryptor">
<property name="config" ref="environmentVariablesConfiguration" />
</bean>
<bean id="propertyConfigurer" class="org.jasypt.spring31.properties.EncryptablePropertyPlaceholderConfigurer">
<constructor-arg ref="configurationEncryptor" />
<property name="location" value="file:${activemq.base}/conf/credentials-enc.properties"/>
</bean>
This way we instructed our ActiveMQ to load the encryptor password from the ACTIVEMQ_ENCRYPTION_PASSWORD environment variable and then use it to decrypt the passwords from the credentials-enc.properties file.
Now let's take care of the content of the ACTIVEMQ_ENCRYPTION_PASSWORD env var.
We can set such an environment variable in our Pod via a Secret. First we need to create one, then use it as an environment variable.
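A minimal sketch of those two steps, with made-up names:

apiVersion: v1
kind: Secret
metadata:
  name: activemq-encryption
type: Opaque
stringData:
  encryption-password: activemq   # the encryptor password used with "bin/activemq encrypt --password ..."
---
# Fragment for the activemq container in the Deployment:
#   env:
#     - name: ACTIVEMQ_ENCRYPTION_PASSWORD
#       valueFrom:
#         secretKeyRef:
#           name: activemq-encryption
#           key: encryption-password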
I hope it helps.
It seems like this ActiveMQ Dockerfile does not provide much in this regard, but it notes that you can specify the location of configuration files on the host system. You would have to prepare these files:
By default data and configuration is stored inside the container and will be lost after the container has been shut down and removed. To persist these files you can mount these directories to directories on your host system:
docker run -p 61616:61616 -p 8161:8161 \
-v /your/persistent/dir/conf:/opt/activemq/conf \
-v /your/persistent/dir/data:/opt/activemq/data \
rmohr/activemq
But maybe you can use a different ActiveMQ container image? This one seems to provide the credentials configuration via environment variables, just like you are using for the other containers: https://hub.docker.com/r/webcenter/activemq
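For instance, something along these lines could work with that image; the environment variable names are an assumption based on that image's README, so verify them before use:

containers:
  - name: web
    image: webcenter/activemq
    env:
      - name: ACTIVEMQ_ADMIN_LOGIN        # assumed variable name; check the image docs
        valueFrom:
          secretKeyRef:
            name: activemq-admin
            key: user
      - name: ACTIVEMQ_ADMIN_PASSWORD     # assumed variable name; check the image docs
        valueFrom:
          secretKeyRef:
            name: activemq-admin
            key: password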

How to secure Kibana dashboard using keycloak-gatekeeper?

Current flow:
incoming request (/sso-kibana) --> Envoy proxy --> /sso-kibana
Expected flow:
incoming request (/sso-kibana) --> Envoy proxy --> keycloak-gatekeeper --> keycloak
--> If not logged in --> keycloak login page --> /sso-kibana
--> If already logged in --> /sso-kibana
I deployed keycloak-gatekeeper in my k8s cluster with the following configuration:
keycloak-gatekeeper.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: keycloak-gatekeeper
  name: keycloak-gatekeeper
spec:
  selector:
    matchLabels:
      app: keycloak-gatekeeper
  replicas: 1
  template:
    metadata:
      labels:
        app: keycloak-gatekeeper
    spec:
      containers:
        - image: keycloak/keycloak-gatekeeper
          imagePullPolicy: Always
          name: keycloak-gatekeeper
          ports:
            - containerPort: 3000
          args:
            - "--config=/keycloak-proxy-poc/keycloak-gatekeeper/gatekeeper.yaml"
            - "--enable-logging=true"
            - "--enable-json-logging=true"
            - "--verbose=true"
          volumeMounts:
            - mountPath: /keycloak-proxy-poc/keycloak-gatekeeper
              name: secrets
      volumes:
        - name: secrets
          secret:
            secretName: gatekeeper
gatekeeper.yaml
discovery-url: https://keycloak/auth/realms/MyRealm
enable-default-deny: true
listen: 0.0.0.0:3000
upstream-url: https://kibana.k8s.cluster:5601
client-id: kibana
client-secret: d62e46c3-2a65-4069-b2fc-0ae5884a4952
Envoy.yaml
- name: kibana
  hosts: [{ socket_address: { address: keycloak-gatekeeper, port_value: 3000 }}]
Problem:
I am able to invoke the Keycloak login on /Kibana, but after login the user is not taken to the /Kibana URL, i.e. the Kibana dashboard is not loading.
Note: Kibana is also running in the k8s cluster.
References:
https://medium.com/@vcorreaniche/securing-serverless-services-in-kubernetes-with-keycloak-gatekeeper-6d07583e7382
https://medium.com/stakater/proxy-injector-enabling-sso-with-keycloak-on-kubernetes-a1012c3d9f8d
Update 1:
I'm able to invoke the Keycloak login on /sso-kibana, but after entering credentials it's giving a 404. The flow is the following:
Step 1. Clicked on http://something/sso-kibana
Step 2. Keycloak login page opens at https://keycloak/auth/realms/THXiRealm/protocol/openid-connect/auth?...
Step 3. After entering credentials redirected to this URL https://something/sso-kibana/oauth/callback?state=890cd02c-f...
Step 4. 404
Update 2:
The 404 error was solved after I added a new route in Envoy.yaml.
Envoy.yaml
- match: { prefix: /sso-kibana/oauth/callback }
  route: { prefix_rewrite: "/", cluster: kibana.k8s.cluster }
Therefore, the expected flow (shown below) is now working fine:
incoming request (/sso-kibana) --> Envoy proxy --> keycloak-gatekeeper --> keycloak
--> If not logged in --> keycloak login page --> /sso-kibana
--> If already logged in --> /sso-kibana
In your config you explicitly enabled enable-default-deny which is explained in the documentation as:
enables a default denial on all requests, you have to explicitly say what is permitted (recommended)
With that enabled, you will need to specify URLs, methods, etc. either via resources entries as shown in [1] or via a command-line argument [2]. In the case of Kibana, you can start with:
resources:
- uri: /app/*
[1] https://www.keycloak.org/docs/latest/securing_apps/index.html#example-usage-and-configuration
[2] https://www.keycloak.org/docs/latest/securing_apps/index.html#http-routing

How to inject kubernetes secret into configuration file

I have one configuration file as follows. This file is a ConfigMap and will be mounted and read by my app. The problem is that this configuration file has one property containing my DB password, and I don't want it to be exposed. So is there any way to inject a Kubernetes Secret into such a configuration file? Thanks.
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>my_db_password</value>
</property>
You can use a combination of an init container and a shared volume for this, if you don't want to expose the secret to the application container directly.
The init container uses the secret to create the configuration file from a template (e.g. sed replacing a placeholder) and places the file in a shared volume. The application container uses the volume to retrieve the file. (Given that you can configure the path where the application expects the configuration file.)
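A rough sketch of that init-container approach, where every name, path and the placeholder token are hypothetical:

spec:
  volumes:
    - name: config-template          # ConfigMap holding the templated XML (not shown)
      configMap:
        name: app-config-template
    - name: rendered-config
      emptyDir: {}
  initContainers:
    - name: render-config
      image: busybox
      command:
        - sh
        - -c
        - sed "s|__DB_PASSWORD__|$DB_PASSWORD|" /templates/config.xml > /rendered/config.xml
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
      volumeMounts:
        - name: config-template
          mountPath: /templates
        - name: rendered-config
          mountPath: /rendered
  containers:
    - name: app
      image: my-app                  # the application reads /etc/myapp/config.xml
      volumeMounts:
        - name: rendered-config
          mountPath: /etc/myapp

This way the secret value only ever appears in the init container's environment and in the rendered file.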
The other option is to simply use the secret as an environment variable for your application and retrieve it separately from the general configuration.
Try the steps below.
1. Add the password as an environment variable reference:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>${my_db_password}</value>
</property>
2. Include the password in a Secret object.
3. Load the env variable from the Secret object. You need to define the env var from a Secret object ref in the Pod definition.
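A minimal sketch of steps 2 and 3, with hypothetical names:

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  my_db_password: changeit          # hypothetical value
---
# Fragment for the container in the Pod/Deployment definition:
#   env:
#     - name: my_db_password
#       valueFrom:
#         secretKeyRef:
#           name: db-credentials
#           key: my_db_password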
The issue is that XML will not expand that variable. Not sure if it fits your use case but we had a JVM application with some XML configuration and did the following in order to make it work:
Create Secret
Reference the Secret in the Deployment environment variables
Inject them as System Properties in a JAVA_OPTS variable
System properties get expanded
Example
Deployment file:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myimage
          ports:
            - containerPort: 8080
          env:
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: my-secret-credentials
                  key: user
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: my-secret-credentials
                  key: password
            - name: JAVA_OPTS
              value: "-Ddb.user=$(DB_USER) -Ddb.password=$(DB_PASSWORD)"
Your XML config file:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>"#{systemProperties['db.user']}"</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>"#{systemProperties['db.password']}"</value>
</property>
This way your secret gets injected safely. Just pay attention when referencing environment variables from another environment variable in the deployment YAML: it uses parentheses instead of curly braces.
Hope that helps
I don't know if this approach works on Hadoop 2.
In Hadoop 3+ I used the following configuration for core-site.xml and metastore-site.xml to set the config values from environment variables:
core-site.xml
<property>
<name>fs.defaultFS</name>
<value>${env.HADOOP_DEFAULT_FS}</value>
</property>
metastore-site.xml:
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>${env.METASTORE_PASSWORD}</value>
</property>
Where HADOOP_DEFAULT_FS and METASTORE_PASSWORD are defined in a k8s Secret which is attached to the container as env variables.
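For completeness, a sketch of how such a Secret could be attached as environment variables (all names and values here are assumptions):

apiVersion: v1
kind: Secret
metadata:
  name: hadoop-env
type: Opaque
stringData:
  HADOOP_DEFAULT_FS: hdfs://namenode:8020   # hypothetical values
  METASTORE_PASSWORD: changeit
---
# Fragment for the container spec: expose every key of the Secret as an env variable.
#   envFrom:
#     - secretRef:
#         name: hadoop-env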
This is how I tried to solve the same problem.
I tried to avoid sed, eval or any other solution that is not secure (special characters or similar issues).
Create a Secret that contains a config file (in your case it will be XML):
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  config.yaml: |
    apiUrl: "https://my.api.com/api/v1"
    username: <user>
    password: <password>
and then mount the Secret as a file:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: redis
      volumeMounts:
        - name: foo
          mountPath: "/etc/foo"
          readOnly: true
  volumes:
    - name: foo
      secret:
        secretName: mysecret
        optional: false # default setting; "mysecret" must exist
https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-config-file/
https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod