I have one configuration file as follows. This file is a ConfigMap that will be mounted and read by my app. The problem is that one property in this configuration file holds my DB password, and I don't want it to be exposed. So is there any way to inject a Kubernetes secret into such a configuration file? Thanks
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>my_db_password</value>
</property>
You can use a combination of an init container and a shared volume for this, if you don't want to expose the secret to the application container directly.
The init container uses the secret to create the configuration file from a template (e.g. sed replacing a placeholder) and places the file in a shared volume. The application container reads the file from the volume. (This assumes you can configure the path where the application expects the configuration file.)
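A minimal sketch of this pattern, assuming a hypothetical ConfigMap hive-config-template whose XML contains a __DB_PASSWORD__ placeholder and a secret db-pass (all names, images, and paths here are assumptions; note the caveat later in this thread about sed and special characters in passwords):
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  volumes:
    - name: rendered-config   # shared between the init and app containers
      emptyDir: {}
    - name: config-template   # ConfigMap holding the XML with a placeholder
      configMap:
        name: hive-config-template
  initContainers:
    - name: render-config
      image: busybox:1.36
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-pass
              key: password
      # Replace the placeholder and write the result into the shared volume
      command:
        - sh
        - -c
        - sed "s|__DB_PASSWORD__|$DB_PASSWORD|" /template/hive-site.xml > /config/hive-site.xml
      volumeMounts:
        - name: config-template
          mountPath: /template
        - name: rendered-config
          mountPath: /config
  containers:
    - name: app
      image: myimage
      volumeMounts:
        - name: rendered-config
          mountPath: /etc/hive/conf   # wherever the app expects its config
Only the init container ever sees the secret environment variable; the application container sees nothing but the rendered file.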
The other option is to simply use the secret as an environment variable for your application and retrieve it separately from the general configuration.
Try the steps below:
1. Reference the password as an environment variable in the configuration file:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>${my_db_password}</value>
</property>
2. Include the password in a Secret object.
3. Load the environment variable from the Secret object. You need to define the env var with a secret key reference in the pod definition, for example:
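A sketch of step 3, assuming a secret named db-pass with a password key (both names are assumptions):
env:
  - name: my_db_password   # matches the ${my_db_password} placeholder above
    valueFrom:
      secretKeyRef:
        name: db-pass
        key: password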
The issue is that XML will not expand that variable. I'm not sure if it fits your use case, but we had a JVM application with some XML configuration and did the following to make it work:
Create a Secret
Reference the Secret in the Deployment's environment variables
Inject them as system properties in a JAVA_OPTS variable
System properties get expanded
Example
Deployment file:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myimage
          ports:
            - containerPort: 8080
          env:
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: my-secret-credentials
                  key: user
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: my-secret-credentials
                  key: password
            - name: JAVA_OPTS
              value: "-Ddb.user=$(DB_USER) -Ddb.password=$(DB_PASSWORD)"
Your XML config file:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>"#{systemProperties['db.user']}"</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>"#{systemProperties['db.password']}"</value>
</property>
This way your secret gets injected safely. Just pay attention when referencing environment variables from another environment variable in the deployment YAML: it uses parentheses instead of curly braces.
Hope that helps
I don't know if this approach works on Hadoop 2.
In Hadoop 3+ I used the following configuration in core-site.xml and metastore-site.xml to set config values from environment variables:
core-site.xml
<property>
  <name>fs.defaultFS</name>
  <value>${env.HADOOP_DEFAULT_FS}</value>
</property>
metastore-site.xml:
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>${env.METASTORE_PASSWORD}</value>
</property>
Here HADOOP_DEFAULT_FS and METASTORE_PASSWORD are defined in a k8s Secret that is attached to the container as environment variables.
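A minimal sketch of that wiring (the secret name and keys are assumptions):
env:
  - name: HADOOP_DEFAULT_FS
    valueFrom:
      secretKeyRef:
        name: hadoop-secrets
        key: default-fs
  - name: METASTORE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: hadoop-secrets
        key: metastore-password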
This is how I tried to solve the same problem.
I wanted to avoid sed, eval, or any other solution that is not robust (special characters and similar issues).
Create a secret that contains a config file (in your case it will be XML):
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  config.yaml: |
    apiUrl: "https://my.api.com/api/v1"
    username: <user>
    password: <password>
and then mount the secret as a file:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: redis
      volumeMounts:
        - name: foo
          mountPath: "/etc/foo"
          readOnly: true
  volumes:
    - name: foo
      secret:
        secretName: mysecret
        optional: false # default setting; "mysecret" must exist
https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-config-file/
https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod
Related
In my Kubernetes cluster, I have a ConfigMap object containing the address of my Postgres pod. It was created with the following YAML:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-configmap
data:
  database_url: postgres-service
Now I reference this value in my Deployment's configuration:
env:
  - name: DB_ADDRESS
    valueFrom:
      configMapKeyRef:
        name: postgres-configmap
        key: database_url
This deployment is a Spring Boot application that communicates with the database, so it reads the database URL from the DB_ADDRESS environment variable. (Ignore the default values; those are used only during development.)
datasource:
  url: ${DB_ADDRESS:jdbc:postgresql://localhost:5432/users}
  username: ${POSTGRES_USER:postgres}
  password: ${POSTGRES_PASSWORD:mysecretpassword}
So, according to the logs, the problem is that the address has to have the jdbc:postgresql:// prefix. Either in the ConfigMap's YAML or in the application.yml I would need to concatenate the protocol prefix with the variable. Any idea how to do that in YAML, or is there some other workaround?
If you create a Service, that will provide you with a hostname (the name of the service) that you can then use in the ConfigMap. E.g., if you create a service named postgres, then your ConfigMap would look like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-configmap
data:
  database_url: jdbc:postgresql://postgres:5432/users
Kubernetes environment variable declarations can embed the values of other environment variables. This is the only string manipulation that Kubernetes supports, and it pretty much only works in env: blocks.
For this setup, once you've retrieved the database hostname from the ConfigMap, you can then embed it into a more complete SPRING_DATASOURCE_URL environment variable:
env:
  - name: DB_ADDRESS
    valueFrom:
      configMapKeyRef:
        name: postgres-configmap
        key: database_url
  - name: SPRING_DATASOURCE_URL
    value: 'jdbc:postgresql://$(DB_ADDRESS):5432/users'
You might similarly parameterize the port (though it will almost always be the standard port 5432) and the database name, as sketched below. Avoid putting these settings in a Spring profile YAML file, where you'd have to rebuild your application whenever a deploy-time setting changes.
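A sketch of that parameterization; the database_port and database_name ConfigMap keys are assumptions, and DB_ADDRESS is assumed to be defined earlier in the same env list as above:
env:
  - name: DB_PORT
    valueFrom:
      configMapKeyRef:
        name: postgres-configmap
        key: database_port
  - name: DB_NAME
    valueFrom:
      configMapKeyRef:
        name: postgres-configmap
        key: database_name
  - name: SPRING_DATASOURCE_URL
    # $(...) references resolve because the variables are defined earlier in the list
    value: 'jdbc:postgresql://$(DB_ADDRESS):$(DB_PORT)/$(DB_NAME)'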
Is there a way to set a kubernetes secret key name when using --from-file other than the filename?
I have a bunch of different configuration files that I use as secrets.json within my containers. However, to organize my files, none of them are named secrets.json on my host. For example secrets.dev.json or secrets.test.json. My apps only know to read in secrets.json.
When I create a secret with kubectl create secret generic my-app-secrets --from-file=secrets.dev.json, this results in the key name being secrets.dev.json and not secrets.json.
I'm mounting the secret contents as a file (a carry-over from migrating from Docker Swarm).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  template:
    spec:
      volumes:
        - name: my-secret
          secret:
            secretName: my-app-secrets
      containers:
        - name: my-app
          volumeMounts:
            - name: my-secret
              mountPath: "/run/secrets/secrets.json"
              subPath: "secrets.json"
Because secrets.json doesn't exist as a key (the filename secrets.dev.json was used instead), it ends up being turned into a directory. I end up with this mount path: /run/secrets/secrets.json/secrets.dev.json.
I'd like to be able to set the key name to secrets.json instead of using the filename of secrets.dev.json.
You can specify the key name with --from-file=[key=]source:
kubectl create secret generic my-app-secrets --from-file=secrets.json=secrets.dev.json
Here, secrets.json is the key name and secrets.dev.json is the source file.
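You can confirm the resulting key name by inspecting the secret; kubectl describe lists the keys and value sizes without printing the values:
kubectl describe secret my-app-secrets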
I have some CATALINA_OPTS properties (database port, user, and so on) set up in a ConfigMap file. This file is then exposed to the docker image via a Pod environment variable.
One of the CATALINA_OPTS properties is the database password, and it is required to move this from the ConfigMap to a Secret.
I can expose a key from the Secret through an environment variable:
apiVersion: v1
kind: Pod
...
containers:
  - name: myContainer
    image: myImage
    env:
      - name: CATALINA_OPTS
        valueFrom:
          configMapKeyRef:
            name: catalina_opts
            key: CATALINA_OPTS
      - name: MY_ENV_PASSWORD
        valueFrom:
          secretKeyRef:
            name: db-pass
            key: my-pass
The thing is, I need to append this password to CATALINA_OPTS. I tried to do it in the Dockerfile:
RUN export CATALINA_OPTS="$CATALINA_OPTS -Dmy.password=$MY_ENV_PASSWORD"
However, MY_ENV_PASSWORD is not appended to the existing CATALINA_OPTS. When I list my environment variables (I'm checking the log in Jenkins) I cannot see the password.
Am I doing something wrong here? Is there any 'regular' way to do this?
Dockerfile RUN steps run as part of your image build and NOT during image execution. Hence, you cannot rely on RUN export (a build step) to set K8S environment variables for your container (a run step).
Remove the RUN export from your Dockerfile and ensure you are setting CATALINA_OPTS in your catalina_opts ConfigMap like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: catalina_opts
data:
  SOME_ENV_VAR: INFO
  CATALINA_OPTS: opts... -Dmy.password=$MY_ENV_PASSWORD
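Note that $MY_ENV_PASSWORD in the ConfigMap value is only expanded if the container's startup script evaluates it through a shell; Kubernetes itself does not expand $VAR syntax. A sketch of an alternative, assuming the secret name and key from the question, is to compose the value in the pod spec instead, since Kubernetes does expand $(VAR) references to env entries defined earlier in the same list:
env:
  - name: MY_ENV_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-pass
        key: my-pass
  - name: CATALINA_OPTS
    # $(MY_ENV_PASSWORD) is expanded by Kubernetes because it is defined above
    value: "opts... -Dmy.password=$(MY_ENV_PASSWORD)"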
I am setting up a Minikube cluster that contains an ActiveMQ message queue together with InfluxDB and Grafana.
For Grafana, I was able to set the admin password via the deployment:
containers:
  - env:
      - name: GF_INSTALL_PLUGINS
        value: grafana-piechart-panel, blackmirror1-singlestat-math-panel
      - name: GF_SECURITY_ADMIN_USER
        value: <grafanaadminusername>
      - name: GF_SECURITY_ADMIN_PASSWORD
        value: <grafanaadminpassword>
    image: grafana/grafana:6.6.0
    name: grafana
    volumeMounts:
      - mountPath: /etc/grafana/provisioning
        name: grafana-volume
        subPath: provisioning/
      - mountPath: /var/lib/grafana/dashboards
        name: grafana-volume
        subPath: dashboards/
      - mountPath: /etc/grafana/grafana.ini
        name: grafana-volume
        subPath: grafana.ini
        readOnly: true
restartPolicy: Always
volumes:
  - name: grafana-volume
    hostPath:
      path: /grafana
For InfluxDB I set the user/password via a secret:
apiVersion: v1
kind: Secret
metadata:
  name: influxdb
  namespace: default
type: Opaque
stringData:
  INFLUXDB_CONFIG_PATH: /etc/influxdb/influxdb.conf
  INFLUXDB_ADMIN_USER: <influxdbadminuser>
  INFLUXDB_ADMIN_PASSWORD: <influxdbadminpassword>
  INFLUXDB_DB: <mydb>
Currently, my ActiveMQ deployment looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: activemq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: activemq
  template:
    metadata:
      labels:
        app: activemq
    spec:
      containers:
        - name: web
          image: rmohr/activemq:5.15.9
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 61616
            - containerPort: 8161
          resources:
            limits:
              memory: 512Mi
How do I achieve a similar result (password and admin user via config file) for ActiveMQ? Even better if this can be achieved via an encrypted secret, which I haven't managed yet for InfluxDB and Grafana either.
I would do this the following way:
Encrypted passwords in ActiveMQ are nicely described here.
First you need to prepare such an encrypted password. ActiveMQ has a built-in utility for that:
As of ActiveMQ 5.4.1 you can encrypt your passwords and safely store them in configuration files. To encrypt the password, you can use the newly added encrypt command like:
$ bin/activemq encrypt --password activemq --input mypassword
...
Encrypted text: eeWjNyX6FY8Fjp3E+F6qTytV11bZItDp
Here the password you want to encrypt is passed with the input argument, while the password argument is a secret used by the encryptor. In a similar fashion you can test out your passwords like:
$ bin/activemq decrypt --password activemq --input eeWjNyX6FY8Fjp3E+F6qTytV11bZItDp
...
Decrypted text: mypassword
Note: It is recommended that you use only alphanumeric characters for the password. Special characters, such as $/^&, are not supported.
The next step is to add the password to the appropriate configuration file, $ACTIVEMQ_HOME/conf/credentials-enc.properties by default.
activemq.username=system
activemq.password=ENC(mYRkg+4Q4hua1kvpCCI2hg==)
guest.password=ENC(Cf3Jf3tM+UrSOoaKU50od5CuBa8rxjoL)
...
jdbc.password=ENC(eeWjNyX6FY8Fjp3E+F6qTytV11bZItDp)
You probably don't even have to rebuild your image so that it contains the appropriate configuration file with the encrypted password. You can add it as ConfigMap data to a volume (how to do that is described here, so I'll avoid copy-pasting from the documentation). Alternatively you may want to use a secret volume. This is not the most important point here, as it is just a way of substituting the original ActiveMQ configuration file in your Pod with your custom one, and you probably already know how to do that.
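A minimal sketch of that substitution, assuming a hypothetical ConfigMap activemq-credentials and the rmohr/activemq image's conf path mentioned below (the same shape works with a Secret volume):
apiVersion: v1
kind: ConfigMap
metadata:
  name: activemq-credentials
data:
  credentials-enc.properties: |
    activemq.username=system
    activemq.password=ENC(mYRkg+4Q4hua1kvpCCI2hg==)
Then, in the Deployment's pod template, mount just that one file over the default:
containers:
  - name: web
    image: rmohr/activemq:5.15.9
    volumeMounts:
      - name: credentials
        mountPath: /opt/activemq/conf/credentials-enc.properties
        subPath: credentials-enc.properties
volumes:
  - name: credentials
    configMap:
      name: activemq-credentials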
There is one more step to configure on the ActiveMQ side. This config file can also be passed via ConfigMap as in the previous example.
Finally, you need to instruct your property loader to encrypt variables when it loads properties to the memory. Instead of the standard property loader we’ll use the special one (see $ACTIVEMQ_HOME/conf/activemq-security.xml) to achieve this.
<bean id="environmentVariablesConfiguration" class="org.jasypt.encryption.pbe.config.EnvironmentStringPBEConfig">
<property name="algorithm" value="PBEWithMD5AndDES" />
<property name="passwordEnvName" value="ACTIVEMQ\_ENCRYPTION\_PASSWORD" />
</bean>
<bean id="configurationEncryptor" class="org.jasypt.encryption.pbe.StandardPBEStringEncryptor">
<property name="config" ref="environmentVariablesConfiguration" />
</bean>
<bean id="propertyConfigurer" class="org.jasypt.spring31.properties.EncryptablePropertyPlaceholderConfigurer">
<constructor-arg ref="configurationEncryptor" />
<property name="location" value="file:${activemq.base}/conf/credentials-enc.properties"/>
</bean>
This way we instructed ActiveMQ to load the encryptor password from the ACTIVEMQ_ENCRYPTION_PASSWORD environment variable and then use it to decrypt passwords from the credentials-enc.properties file.
Now let's take care of the content of the ACTIVEMQ_ENCRYPTION_PASSWORD env var.
We can set such an environment variable in our Pod via a Secret: first create one, then use it as an environment variable.
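A sketch of those two steps (the secret name, key, and password value are assumptions):
kubectl create secret generic activemq-encryption --from-literal=password=your-encryptor-password
and then, in the ActiveMQ container spec:
env:
  - name: ACTIVEMQ_ENCRYPTION_PASSWORD
    valueFrom:
      secretKeyRef:
        name: activemq-encryption
        key: password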
I hope it helps.
It seems this ActiveMQ Dockerfile does not provide much in this regard, but it notes that you can specify the location of configuration files on the host system. You would have to prepare these files yourself:
By default data and configuration is stored inside the container and will be lost after the container has been shut down and removed. To persist these files you can mount these directories to directories on your host system:
docker run -p 61616:61616 -p 8161:8161 \
-v /your/persistent/dir/conf:/opt/activemq/conf \
-v /your/persistent/dir/data:/opt/activemq/data \
rmohr/activemq
But maybe you can use a different ActiveMQ container image? This one seems to provide the credentials configuration via environment variables, just like you are using for the other containers: https://hub.docker.com/r/webcenter/activemq
I'm trying to implement a Streaming Sidecar Container logging architecture in Kubernetes using Fluentd.
In a single pod I have:
emptyDir Volume (as log storage)
Application container
Fluent log-forwarder container
Basically, the application container's logs are stored in the shared emptyDir volume. The Fluentd log-forwarder container tails this log file in the shared emptyDir volume and forwards it to an external log aggregator.
The Fluentd log-forwarder container uses the following config in td-agent.conf:
<source>
  @type tail
  tag "#{ENV['TAG_VALUE']}"
  path (path to log file in volume)
  pos_file /var/log/td-agent/tmp/access.log.pos
  format json
  time_key time
  time_format %iso8601
  keep_time_key true
</source>

<match *.*>
  @type forward
  @id forward_tail
  heartbeat_type tcp
  <server>
    host (server-host-address)
  </server>
</match>
I'm using an environment variable to set the tag value so I can change it dynamically; e.g. when I have to use this container side by side with a different application container, I don't have to modify this config and rebuild the image.
Now, I set the environment variable value during pod creation in Kubernetes:
.
.
spec:
  containers:
    - name: application-pod
      image: application-image:1.0
      ports:
        - containerPort: 1234
      volumeMounts:
        - name: logvolume
          mountPath: /var/log/app
    - name: log-forwarder
      image: log-forwarder-image:1.0
      env:
        - name: "TAG_VALUE"
          value: "app.service01"
      volumeMounts:
        - name: logvolume
          mountPath: /var/log/app
  volumes:
    - name: logvolume
      emptyDir: {}
After deploying the pod, I found that the tag value in the Fluentd log-forwarder container comes out empty (expected value: "app.service01"). I imagine it's because Fluentd's td-agent initializes before the TAG_VALUE environment variable gets assigned.
So, the main question is...
How can I dynamically set the td-agent's tag value?
But really, what I'm wondering is:
Is it possible to assign an environment variable before a container's initialization in Kubernetes?
As an answer to your first question (how can I dynamically set the td-agent's tag value?): what you are already doing, defining tag "#{ENV['TAG_VALUE']}" inside the fluentd config file, seems the best way.
For your second question: environment variables are assigned before a container's initialization.
So it should work, and I tested it with the sample YAML below, which worked fine.
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-conf
data:
  fluentd.conf.template: |
    <source>
      @type tail
      tag "#{ENV['TAG_VALUE']}"
      path /var/log/nginx/access.log
      format nginx
    </source>
    <match *.*>
      @type stdout
    </match>
---
apiVersion: v1
kind: Pod
metadata:
  name: log-forwarder
  labels:
    purpose: test-fluentd
spec:
  containers:
    - name: nginx
      image: nginx:latest
      volumeMounts:
        - name: logvolume
          mountPath: /var/log/nginx
    - name: fluentd
      image: fluent/fluentd
      env:
        - name: "TAG_VALUE"
          value: "test.nginx"
        - name: "FLUENTD_CONF"
          value: "fluentd.conf"
      volumeMounts:
        - name: fluentd-conf
          mountPath: /fluentd/etc
        - name: logvolume
          mountPath: /var/log/nginx
  volumes:
    - name: fluentd-conf
      configMap:
        name: fluentd-conf
        items:
          - key: fluentd.conf.template
            path: fluentd.conf
    - name: logvolume
      emptyDir: {}
  restartPolicy: Never
And when I curl the nginx pod, I see this output on the fluentd container's stdout:
kubectl logs -f log-forwarder fluentd
2019-03-20 09:50:54.000000000 +0000 test.nginx: {"remote":"10.20.14.1","host":"-","user":"-","method":"GET","path":"/","code":"200","size":"612","referer":"-","agent":"curl/7.60.0","http_x_forwarded_for":"-"}
2019-03-20 09:50:55.000000000 +0000 test.nginx: {"remote":"10.20.14.1","host":"-","user":"-","method":"GET","path":"/","code":"200","size":"612","referer":"-","agent":"curl/7.60.0","http_x_forwarded_for":"-"}
2019-03-20 09:50:56.000000000 +0000 test.nginx: {"remote":"10.128.0.26","host":"-","user":"-","method":"GET","path":"/","code":"200","size":"612","referer":"-","agent":"curl/7.60.0","http_x_forwarded_for":"-"}
As you can see, my environment variable TAG_VALUE=test.nginx was applied to the log entries.
I hope this is useful.
You can use the combination of fluent-plugin-kubernetes_metadata_filter and fluent-plugin-rewrite-tag-filter to set the container name or similar metadata as the tag, roughly as sketched below.
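A rough sketch of that combination, assuming both plugins are installed and records arrive tagged kubernetes.** and get enriched by the metadata filter (the field path follows the plugin's usual output, but verify against your setup):
<filter kubernetes.**>
  @type kubernetes_metadata
</filter>

<match kubernetes.**>
  @type rewrite_tag_filter
  <rule>
    # Re-tag each record with its container name from the kubernetes metadata
    key $.kubernetes.container_name
    pattern /^(.+)$/
    tag $1
  </rule>
</match>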