ActiveMQ running in Kubernetes minikube: how to configure admin password

I am setting up a minikube which contains an ActiveMQ message queue together with InfluxDB and Grafana.
For Grafana, I was able to set the admin password via the deployment:
containers:
- env:
  - name: GF_INSTALL_PLUGINS
    value: grafana-piechart-panel, blackmirror1-singlestat-math-panel
  - name: GF_SECURITY_ADMIN_USER
    value: <grafanaadminusername>
  - name: GF_SECURITY_ADMIN_PASSWORD
    value: <grafanaadminpassword>
  image: grafana/grafana:6.6.0
  name: grafana
  volumeMounts:
  - mountPath: /etc/grafana/provisioning
    name: grafana-volume
    subPath: provisioning/
  - mountPath: /var/lib/grafana/dashboards
    name: grafana-volume
    subPath: dashboards/
  - mountPath: /etc/grafana/grafana.ini
    name: grafana-volume
    subPath: grafana.ini
    readOnly: true
restartPolicy: Always
volumes:
- name: grafana-volume
  hostPath:
    path: /grafana
For InfluxDB I set the user/password via a Secret:
apiVersion: v1
kind: Secret
metadata:
  name: influxdb
  namespace: default
type: Opaque
stringData:
  INFLUXDB_CONFIG_PATH: /etc/influxdb/influxdb.conf
  INFLUXDB_ADMIN_USER: <influxdbadminuser>
  INFLUXDB_ADMIN_PASSWORD: <influxdbadminpassword>
  INFLUXDB_DB: <mydb>
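Such a Secret can then be consumed as environment variables in the InfluxDB deployment via envFrom (the container name and image tag below are just placeholders):
containers:
- name: influxdb
  image: influxdb:1.8   # placeholder tag
  envFrom:
  - secretRef:
      name: influxdb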
Currently, my ActiveMQ deployment looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: activemq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: activemq
  template:
    metadata:
      labels:
        app: activemq
    spec:
      containers:
      - name: web
        image: rmohr/activemq:5.15.9
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 61616
        - containerPort: 8161
        resources:
          limits:
            memory: 512Mi
How do I achieve a similar result (admin user and password set via configuration) for ActiveMQ? Even better would be to achieve this via an encrypted Secret, which I haven't managed yet for InfluxDB and Grafana either.

I would do this the following way:
Encrypted passwords in ActiveMQ are nicely described here.
First you need to prepare such an encrypted password. ActiveMQ has a built-in utility for that:
As of ActiveMQ 5.4.1 you can encrypt your passwords and safely store
them in configuration files. To encrypt the password, you can use the
newly added encrypt command like:
$ bin/activemq encrypt --password activemq --input mypassword
...
Encrypted text: eeWjNyX6FY8Fjp3E+F6qTytV11bZItDp
Where the password you want to encrypt is passed with the input argument, while the password argument is a secret used by the encryptor. In a similar fashion you can test out your passwords like:
$ bin/activemq decrypt --password activemq --input eeWjNyX6FY8Fjp3E+F6qTytV11bZItDp
...
Decrypted text: mypassword
Note: It is recommended that you use only alphanumeric characters for
the password. Special characters, such as $/^&, are not supported.
The next step is to add the password to the appropriate configuration
file, $ACTIVEMQ_HOME/conf/credentials-enc.properties by default.
activemq.username=system
activemq.password=ENC(mYRkg+4Q4hua1kvpCCI2hg==)
guest.password=ENC(Cf3Jf3tM+UrSOoaKU50od5CuBa8rxjoL)
...
jdbc.password=ENC(eeWjNyX6FY8Fjp3E+F6qTytV11bZItDp)
You probably don't even have to rebuild your image so that it contains the appropriate configuration file with the encrypted password. You can add it as ConfigMap data to a volume. You can read how to do that here, so I'll avoid copy-pasting more documentation. Alternatively, you may want to use a secret volume. This isn't the most important point here, as it is just a way of substituting your original ActiveMQ configuration file in your Pod with your custom configuration file, and you probably already know how to do that.
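For example, a minimal sketch of shipping the encrypted credentials file via a ConfigMap (the ConfigMap name is illustrative; /opt/activemq/conf is the config directory the rmohr/activemq image documents):
apiVersion: v1
kind: ConfigMap
metadata:
  name: activemq-credentials
data:
  credentials-enc.properties: |
    activemq.username=system
    activemq.password=ENC(mYRkg+4Q4hua1kvpCCI2hg==)
which you would then mount over the default file in the ActiveMQ container:
        volumeMounts:
        - name: activemq-credentials
          mountPath: /opt/activemq/conf/credentials-enc.properties
          subPath: credentials-enc.properties
      volumes:
      - name: activemq-credentials
        configMap:
          name: activemq-credentials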
There is one more step to configure on the ActiveMQ side. This config file can also be passed via a ConfigMap, as in the previous example.
Finally, you need to instruct your property loader to encrypt
variables when it loads properties to the memory. Instead of standard
property loader we’ll use the special one (see
$ACTIVEMQ_HOME/conf/activemq-security.xml) to achieve this.
<bean id="environmentVariablesConfiguration" class="org.jasypt.encryption.pbe.config.EnvironmentStringPBEConfig">
    <property name="algorithm" value="PBEWithMD5AndDES" />
    <property name="passwordEnvName" value="ACTIVEMQ_ENCRYPTION_PASSWORD" />
</bean>

<bean id="configurationEncryptor" class="org.jasypt.encryption.pbe.StandardPBEStringEncryptor">
    <property name="config" ref="environmentVariablesConfiguration" />
</bean>

<bean id="propertyConfigurer" class="org.jasypt.spring31.properties.EncryptablePropertyPlaceholderConfigurer">
    <constructor-arg ref="configurationEncryptor" />
    <property name="location" value="file:${activemq.base}/conf/credentials-enc.properties"/>
</bean>
This way we instructed our ActiveMQ to load the encryptor password from the ACTIVEMQ_ENCRYPTION_PASSWORD environment variable and then use it to decrypt the passwords from the credentials-enc.properties file.
Now let's take care of the content of the ACTIVEMQ_ENCRYPTION_PASSWORD env var.
We can set such an environment variable in our Pod via a Secret. First we need to create one, then use it as an environment variable.
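A minimal sketch of such a Secret and its use as an env var (the names and placeholder value are illustrative):
apiVersion: v1
kind: Secret
metadata:
  name: activemq-encryption
type: Opaque
stringData:
  encryption-password: <your-encryptor-password>
and in the ActiveMQ container spec:
        env:
        - name: ACTIVEMQ_ENCRYPTION_PASSWORD
          valueFrom:
            secretKeyRef:
              name: activemq-encryption
              key: encryption-password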
I hope it helps.

It seems like this ActiveMQ Dockerfile does not provide much in this regard, but it notes that you can specify the location of configuration files on the host system. You would have to prepare these files yourself:
By default data and configuration is stored inside the container and will be lost after the container has been shut down and removed. To persist these files you can mount these directories to directories on your host system:
docker run -p 61616:61616 -p 8161:8161 \
-v /your/persistent/dir/conf:/opt/activemq/conf \
-v /your/persistent/dir/data:/opt/activemq/data \
rmohr/activemq
But maybe you can use a different ActiveMQ container image? This one seems to provide the credentials configuration via environment variables, just like you are already doing for the other containers: https://hub.docker.com/r/webcenter/activemq
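If that image still documents the ACTIVEMQ_ADMIN_LOGIN / ACTIVEMQ_ADMIN_PASSWORD environment variables (please verify on the linked page; this is an assumption about that image), a sketch of the container spec could look like this, with the Secret name and keys being placeholders:
containers:
- name: activemq
  image: webcenter/activemq   # pick a published tag
  env:
  - name: ACTIVEMQ_ADMIN_LOGIN
    valueFrom:
      secretKeyRef:
        name: activemq-admin
        key: username
  - name: ACTIVEMQ_ADMIN_PASSWORD
    valueFrom:
      secretKeyRef:
        name: activemq-admin
        key: password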

Related

How to create keycloak with operator and external database

I followed this but it is not working.
I created a custom secret:
apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db-secret
data:
  POSTGRES_DATABASE: ...
  POSTGRES_EXTERNAL_ADDRESS: ...
  POSTGRES_EXTERNAL_PORT: ...
  POSTGRES_HOST: ...
  POSTGRES_USERNAME: ...
  POSTGRES_PASSWORD: ...
and keycloak with external db:
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
  labels:
    app: keycloak
  name: keycloak
spec:
  externalDatabase:
    enabled: true
  instances: 1
but when I check the log, Keycloak cannot connect to the db. It is still using the default value keycloak-postgresql.keycloak, not the value defined in my custom secret. Why is it not using the value from my secret?
UPDATE
When I check the keycloak pod created by the operator, I can see:
env:
- name: DB_VENDOR
  value: POSTGRES
- name: DB_SCHEMA
  value: public
- name: DB_ADDR
  value: keycloak-postgresql.keycloak
- name: DB_PORT
  value: '5432'
- name: DB_DATABASE
  value: keycloak
- name: DB_USER
  valueFrom:
    secretKeyRef:
      name: keycloak-db-secret
      key: POSTGRES_USERNAME
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: keycloak-db-secret
      key: POSTGRES_PASSWORD
So now I know why I cannot connect to the db: it uses a different DB_ADDR. How can I use the address my-app.postgres (a db in another namespace)?
I don't know why POSTGRES_HOST in the secret is not working and the pod is still using the default service name.
To connect to a service in another namespace you can use:
<servicename>.<namespace>.svc.cluster.local
Suppose your Postgres deployment and service are running in the test namespace; it will be:
postgres.test.svc.cluster.local
This is what I am using: https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment/blob/main/keycload-deployment.yaml
I have also attached the Postgres file you can use; however, in my case I have set up both Keycloak and Postgres in the same namespace, so it works like a charm.
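Applied to the question's secret, that means putting the fully qualified service name (rather than an IP) into POSTGRES_EXTERNAL_ADDRESS; a sketch assuming Postgres runs as service postgres in the test namespace:
apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db-secret
stringData:
  POSTGRES_EXTERNAL_ADDRESS: postgres.test.svc.cluster.local
  POSTGRES_EXTERNAL_PORT: "5432"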
I'm using Azure PostgreSQL for that, and it works correctly. In the pod configuration it also uses keycloak-postgresql.keycloak as DB_ADDR, but this points to an internal service created by the operator based on keycloak-db-secret.
keycloak-postgresql.keycloak is another service, created by the Keycloak Operator, which is used to connect to PostgreSQL's service.
You can check its endpoints:
$ kubectl get endpoints keycloak-postgresql -n keycloak
NAME                  ENDPOINTS                        AGE
keycloak-postgresql   {postgresql's service ip}:5432   4m31s
However, the reason why it fails is due to the selector of this service:
selector:
  app: keycloak
  component: database
So if your DB Pod has a different label, the selector will not match it.
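In other words (a sketch, assuming your database runs as its own Deployment or StatefulSet), the DB pod template would need labels matching that selector:
  template:
    metadata:
      labels:
        app: keycloak
        component: database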
I reported this issue to the community. If they reply to me, I will try to fix this bug by submitting a patch.
I was having this same issue. After looking at @JiyeYu's answer, I searched the project's issue backlog and found some related issues that are still open (at the moment of this reply).
Particularly this one: https://issues.redhat.com/browse/KEYCLOAK-18602
After reading this, and its comments, I did the following:
Don't use IPs in POSTGRES_EXTERNAL_ADDRESS. If your Postgres is hosted within K8s via a StatefulSet, use the full <servicename>.<namespace>.svc.cluster.local (as in @Harsh Manvar's answer).
Remove the POSTGRES_HOST setting from the secret (don't just set it to the default, delete it). Apparently, it is not only being ignored, but also breaking the keycloak pod initialization process somehow.
After I applied these changes the issue was solved for me.
I also had a similar problem. It turned out that, since I was using SSLMODE: "verify-full", Keycloak expected the correct hostname of my external db.
Since Keycloak internally translates the real external db address into "keycloak-postgresql.keycloak", it expected something like "keycloak-postgresql.my-keycloak-namespace".
The log went something like this:
SEVERE [org.postgresql.ssl.PGjdbcHostnameVerifier] (ServerService Thread Pool -- 57) Server name validation failed: certificate for host keycloak-postgresql.my-keycloak-namespace dNSName entries subjectAltName, but none of them match. Assuming server name validation failed
After I added the host keycloak-postgresql.my-keycloak-namespace to the db certificate, it worked as advertised.

Pass postgres parameter into Kubernetes deployment

I am trying to set a Postgres parameter (shared_buffers) on my Postgres database pod. I tried using an init container to set the DB variable, but it is not working because the init container runs as the root user.
What is the best way to edit the db variable on the pods? I do not have the ability to make the change within the image, because the variable needs to be different for different instances. If it helps, the command I need to run is a "postgres -c" command.
"root" execution of the PostgreSQL server is not permitted.
The server must be started under an unprivileged user ID to prevent
possible system security compromise. See the documentation for
more information on how to properly start the server.
You didn't share your Pod/Deployment definition, but I believe you want to set shared_buffers from the command line of the actual container (not the init container) in your Pod definition. Something like this if you are using a deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:12.2
        imagePullPolicy: "IfNotPresent"
        command: ["postgres"]                        # <-- add this
        args: ["-D", "-c", "shared_buffers=128MB"]   # <-- add this
        ports:
        - containerPort: 5432
        securityContext:
          runAsUser: 1000
          runAsGroup: 1000
          fsGroup: 1000
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgredb
        - name: postgresql-config-volume   # <-- use if you are using a ConfigMap (see below)
          mountPath: /var/lib/postgres/data/postgresql.conf
      volumes:
      - name: postgredb
        persistentVolumeClaim:
          claimName: postgres-pv-claim   # <-- note: you need to have this already predefined
      - name: postgresql-config-volume   # <-- use if you are using a ConfigMap (see below)
        configMap:
          name: postgresql-config
Notice that if you are using a ConfigMap you can also do this (note that you may want to add more configuration options besides shared_buffers):
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgresql-config
data:
  postgresql.conf: |
    shared_buffers=256MB
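Alternatively (this variant is my own sketch rather than part of the original answer), you can mount the ConfigMap at a path of its own and point postgres at it explicitly with the -c config_file=... flag, so nothing has to be written into the data directory:
        command: ["postgres"]
        args: ["-c", "config_file=/etc/postgresql/postgresql.conf"]
        volumeMounts:
        - name: postgresql-config-volume
          mountPath: /etc/postgresql
      volumes:
      - name: postgresql-config-volume
        configMap:
          name: postgresql-config
Note that settings absent from the mounted postgresql.conf fall back to PostgreSQL's built-in defaults (for example listen_addresses defaults to localhost), so include everything your instance needs.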
In my case, @Rico's answer didn't help me out of the box because I don't use postgres with a persistent storage mount, which means there is no /var/lib/postgresql/data folder and no pre-existing database (so both proposed options failed in my case).
To successfully apply postgres settings, I used only args (without a command section).
In that case, k8s will pass these args to the default entrypoint defined in the docker image (docs), and the postgres entrypoint is written so that any options passed to the docker command are passed along to the postgres server daemon (see the Database Configuration section at https://hub.docker.com/_/postgres):
apiVersion: v1
kind: Pod
metadata:
  name: postgres
spec:
  containers:
  - image: postgres:9.6.8
    name: postgres
    args: ["-c", "shared_buffers=256MB", "-c", "max_connections=207"]
To check that the settings applied:
$ kubectl exec -it postgres -- bash
root@postgres:/# su postgres
$ psql -c 'show max_connections;'
 max_connections
-----------------
 207
(1 row)

DataDog how to disable Redis integration

I've installed the DataDog agent on my Kubernetes cluster using the Helm chart (https://github.com/helm/charts/tree/master/stable/datadog).
This works very well except for one thing. I have a number of Redis containers that have passwords set. This seems to be causing issues for the DataDog agent because it can't connect to Redis without a password.
I would like to either disable monitoring Redis completely or somehow bypass the Redis authentication. If I leave it as is I get a lot of error messages in the DataDog container logs and the redisdb integration shows up in yellow in the DataDog dashboard.
What are my options here?
I am not a fan of Helm, but you can accomplish this in two ways:
Via env vars: make use of the DD_AC_EXCLUDE variable to exclude the Redis containers, e.g. DD_AC_EXCLUDE=name:prefix-redis (see the env snippet after the ConfigMap example below).
Via a ConfigMap: mount an empty ConfigMap over /etc/datadog-agent/conf.d/redisdb.d/. Below is an example where I renamed auto_conf.yaml to auto_conf.yaml.example.
apiVersion: v1
data:
  auto_conf.yaml.example: |
    ad_identifiers:
      - redis

    init_config:

    instances:

        ## @param host - string - required
        ## Enter the host to connect to.
        #
      - host: "%%host%%"

        ## @param port - integer - required
        ## Enter the port of the host to connect to.
        #
        port: "6379"
  conf.yaml.example: |
    init_config:

    instances:

        ## @param host - string - required
        ## Enter the host to connect to.
        # [removed content]
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: redisdb-d
alter the daemonset/deployment object:
[...]
        volumeMounts:
        - name: redisdb-d
          mountPath: /etc/datadog-agent/conf.d/redisdb.d
[...]
      volumes:
      - name: redisdb-d
        configMap:
          name: redisdb-d
[...]
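For the first option, the env var route, the sketch is just an env entry on the agent container (the value depends on how your Redis containers are actually named; with the Helm chart you would set this through the chart's values for agent environment variables rather than editing the DaemonSet by hand):
        env:
        - name: DD_AC_EXCLUDE
          value: "name:prefix-redis"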

Dynamic tagging for Fluentd td-agent source plugin

I'm trying to implement a Streaming Sidecar Container logging architecture in Kubernetes using Fluentd.
In a single pod I have:
emptyDir Volume (as log storage)
Application container
Fluent log-forwarder container
Basically, the Application container logs are stored in the shared emptyDir volume. The Fluentd log-forwarder container tails this log file in the shared emptyDir volume and forwards it to an external log aggregator.
The Fluentd log-forwarder container uses the following config in td-agent.conf:
<source>
  @type tail
  tag "#{ENV['TAG_VALUE']}"
  path (path to log file in volume)
  pos_file /var/log/td-agent/tmp/access.log.pos
  format json
  time_key time
  time_format %iso8601
  keep_time_key true
</source>
<match *.*>
  @type forward
  @id forward_tail
  heartbeat_type tcp
  <server>
    host (server-host-address)
  </server>
</match>
I'm using an environment variable to set the tag value so I can change it dynamically e.g. when I have to use this container side-by-side with a different Application container, I don't have to modify this config and rebuild this image again.
Now, I set the environment variable value during pod creation in Kubernetes:
...
spec:
  containers:
  - name: application-pod
    image: application-image:1.0
    ports:
    - containerPort: 1234
    volumeMounts:
    - name: logvolume
      mountPath: /var/log/app
  - name: log-forwarder
    image: log-forwarder-image:1.0
    env:
    - name: "TAG_VALUE"
      value: "app.service01"
    volumeMounts:
    - name: logvolume
      mountPath: /var/log/app
  volumes:
  - name: logvolume
    emptyDir: {}
After deploying the pod, I found that the tag value in the Fluentd log-forwarder container comes out empty (expected value: "app.service01"). I imagine it's because Fluentd's td-agent initializes before the TAG_VALUE environment variable gets assigned.
So, the main question is...
How can I dynamically set the td-agent's tag value?
But really, what I'm wondering is:
Is it possible to assign an environment variable before a container's initialization in Kubernetes?
As an answer to your first question (how can I dynamically set the td-agent's tag value?), what you are already doing, defining tag "#{ENV['TAG_VALUE']}" inside the fluentd config file, seems to be the best way.
For your second question, environment variables are assigned before a container's initialization.
So it should work. I tested with the sample YAML below and it worked fine.
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-conf
data:
  fluentd.conf.template: |
    <source>
      @type tail
      tag "#{ENV['TAG_VALUE']}"
      path /var/log/nginx/access.log
      format nginx
    </source>
    <match *.*>
      @type stdout
    </match>
---
apiVersion: v1
kind: Pod
metadata:
  name: log-forwarder
  labels:
    purpose: test-fluentd
spec:
  containers:
  - name: nginx
    image: nginx:latest
    volumeMounts:
    - name: logvolume
      mountPath: /var/log/nginx
  - name: fluentd
    image: fluent/fluentd
    env:
    - name: "TAG_VALUE"
      value: "test.nginx"
    - name: "FLUENTD_CONF"
      value: "fluentd.conf"
    volumeMounts:
    - name: fluentd-conf
      mountPath: /fluentd/etc
    - name: logvolume
      mountPath: /var/log/nginx
  volumes:
  - name: fluentd-conf
    configMap:
      name: fluentd-conf
      items:
      - key: fluentd.conf.template
        path: fluentd.conf
  - name: logvolume
    emptyDir: {}
  restartPolicy: Never
And when I curl the nginx pod, I see this output on the fluentd container's stdout:
$ kubectl logs -f log-forwarder fluentd
2019-03-20 09:50:54.000000000 +0000 test.nginx: {"remote":"10.20.14.1","host":"-","user":"-","method":"GET","path":"/","code":"200","size":"612","referer":"-","agent":"curl/7.60.0","http_x_forwarded_for":"-"}
2019-03-20 09:50:55.000000000 +0000 test.nginx: {"remote":"10.20.14.1","host":"-","user":"-","method":"GET","path":"/","code":"200","size":"612","referer":"-","agent":"curl/7.60.0","http_x_forwarded_for":"-"}
2019-03-20 09:50:56.000000000 +0000 test.nginx: {"remote":"10.128.0.26","host":"-","user":"-","method":"GET","path":"/","code":"200","size":"612","referer":"-","agent":"curl/7.60.0","http_x_forwarded_for":"-"}
As you can see, my environment variable TAG_VALUE=test.nginx has been applied to the log entries.
I hope it will be useful.
You can use the combination of fluent-plugin-kubernetes_metadata_filter and fluent-plugin-rewrite-tag-filter to set the container name (or similar metadata) as the tag.
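A rough sketch of that combination for a standard container-log tail (i.e. records tagged kubernetes.var.log.containers.** so the metadata filter can resolve pod info from the tag; both plugins must be installed in the image, and the key and resulting tag format are just examples):
<filter kubernetes.var.log.containers.**>
  @type kubernetes_metadata
</filter>

<match kubernetes.var.log.containers.**>
  @type rewrite_tag_filter
  <rule>
    key $.kubernetes.container_name
    pattern /^(.+)$/
    tag app.$1
  </rule>
</match>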

how to pass a configuration file through yaml on kubernetes to create new replication controller

I am trying to pass a configuration file (which is located on the master) to an nginx container at the time of replication controller creation through Kubernetes, e.g. like we would use the ADD command in a Dockerfile...
There isn't a way to dynamically add a file to a pod specification when instantiating it in Kubernetes.
Here are a couple of alternatives (that may solve your problem):
Build the configuration file into your container (using the docker ADD command). This has the advantage that it works in the way you are already familiar with, but the disadvantage that you can no longer parameterize your container without rebuilding it.
Use environment variables instead of a configuration file. This may require some refactoring of your code (or creating a side-car container to turn environment variables into the configuration file that your application expects).
Put the configuration file into a volume. Mount this volume into your pod and read the configuration file from the volume.
Use a secret. This isn't the intended use for secrets, but secrets manifest themselves as files inside your container, so you can base64 encode your configuration file, store it as a secret in the apiserver, and then point your application to the location of the secret file that is created inside your pod.
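A hedged sketch of the volume/Secret options above (the file name, mount path and contents are only placeholders):
apiVersion: v1
kind: Secret
metadata:
  name: nginx-conf
stringData:
  default.conf: |
    server {
      listen 80;
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: nginx-conf
      mountPath: /etc/nginx/conf.d
      readOnly: true
  volumes:
  - name: nginx-conf
    secret:
      secretName: nginx-conf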
I believe you can also download the config during container initialization.
See the example below; you may download a config file instead of index.html, but I would not use this for sensitive info like passwords.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://kubernetes.io
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}