Using a Kubernetes secret env var inside another env var - kubernetes

I have a secret being used as an env var inside another env var, as follows:
- name: "PWD"
valueFrom:
secretKeyRef:
name: "credentials"
key: "password"
- name: HOST
value: "xyz.mongodb.net"
- name: MONGODB_URI
value: "mongodb+srv://user:$(PWD)#$(HOST)/db_name?"
When I exec into the container and run the env command to see the values, I see:
mongodb+srv://user:password123
#xyz.mongodb.net/db_name?
The container logs show an authentication failure error.
Is this expected to work in Kubernetes? The docs talk about dependent env vars but do not give an example using secrets. I did not find a clear explanation of this after extensive searching; I only found one article doing something similar.
Some points to note:
- The secret is a sealed secret.
- This is the final manifest's contents, but all of this is templated using Helm.
- The value is consumed inside a Spring Boot application.
Is the newline after 123 expected?
If this evaluation of an env var from a secret inside another env var is possible, then what am I doing wrong here?

The issue was with the command used to encode the secret: echo "password" | base64. echo adds a newline character at the end of the string; using echo -n "password" | base64 fixed the secret.
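You can see the difference directly in the encoded output (a quick demonstration with a made-up password):
$ echo "password123" | base64
cGFzc3dvcmQxMjMK
$ echo -n "password123" | base64
cGFzc3dvcmQxMjM=
The trailing K in the first result encodes the newline, which then ends up in the decoded value inside the container. Alternatively, kubectl create secret generic credentials --from-literal=password=password123 encodes the value for you, and a plain Secret manifest can use the stringData field instead of data to avoid manual base64 entirely.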
Closing the issue.

Related

Parameter name containing special characters in a Helm chart

In my Helm chart, I need to set the following Java Spring parameter name:
company.sms.security.password#id(name):
  secret:
    name: mypasswd
    key: mysecretkey
But when applying the template, I encounter a syntax issue.
oc apply -f template.yml
The Deployment "template" is invalid: spec.template.spec.containers[0].env[79].name: Invalid value: "company.sms.security.password#id(name)": a valid environment variable name must consist of alphabetic characters, digits, '_', '-', or '.', and must not start with a digit (e.g. 'my.env-name', or 'MY_ENV.NAME', or 'MyEnvName1', regex used for validation is '[-._a-zA-Z][-._a-zA-Z0-9]*')
What I would usually do is define this variable at runtime, like this:
JAVA_TOOL_OPTIONS:
-Dcompany.sms.security.password#id(name)=mypass
But since it's storing sensitive data, I obviously cannot put the password there in clear text.
So far I could only think of defining an init container as a workaround; changing the parameter name is not an option.
Edit: So the goal is to not log the password in either the manifest or the application logs.
Assign the value from your secret to its own environment variable, and use that in the JAVA_TOOL_OPTIONS environment variable value. The way to expand the value of a previously defined variable VAR_NAME is $(VAR_NAME).
For example:
- name: MY_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mypasswd
      key: mysecretkey
- name: JAVA_TOOL_OPTIONS
  value: "-Dcompany.sms.security.password#id(name)=$(MY_PASSWORD)"
Constraints
There are some conditions for Kubernetes to parse $(VAR_NAME) correctly; otherwise $(VAR_NAME) is left as a literal string:
- The variable VAR_NAME must be defined before the one that uses it (see the sketch below).
- The value of VAR_NAME must be defined and must not itself reference another variable; if it is undefined or consists of other variables, $(VAR_NAME) is parsed as a plain string.
In the example above, if the secret mypasswd in the pod's namespace doesn't have a value for the key mysecretkey, $(MY_PASSWORD) will appear literally as a string and will not be expanded.
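A hypothetical illustration of the ordering rule (the option and variable names are made up):
- name: BAD_OPTIONS
  value: "-Dsome.param=$(MY_PASSWORD)"   # MY_PASSWORD is not defined yet: stays the literal string $(MY_PASSWORD)
- name: MY_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mypasswd
      key: mysecretkey
- name: GOOD_OPTIONS
  value: "-Dsome.param=$(MY_PASSWORD)"   # MY_PASSWORD is defined above: expanded to the secret value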
References:
Dependent environment variables
Use secret data in environment variables

Error when trying to apply a ConfigMap to authenticate with an EKS cluster

I have the following question: I try to connect to an EKS cluster using Terraform with GitLab CI/CD and I receive the error message below, but when I try it on my own computer the error does not appear. Has anyone had the same error?
$ terraform output authconfig > authconfig.yaml
$ cat authconfig.yaml
<<EOT
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: "arn:aws:iam::503655390180:role/clusters-production-workers"
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
EOT
$ kubectl create -f authconfig.yaml -n kube-system
error: error parsing authconfig.yaml: error converting YAML to JSON: yaml: line 2: mapping values are not allowed in this context
The output includes the EOT (end of text) markers because the value was originally generated as a heredoc multiline string.
As the documentation suggests (Terraform doc link):
Don't use "heredoc" strings to generate JSON or YAML. Instead, use the
jsonencode function or the yamlencode function so that Terraform can
be responsible for guaranteeing valid JSON or YAML syntax.
Use the jsonencode or yamlencode function before building the output.
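For example, a minimal sketch of such an output; the exact structure of your Terraform data is an assumption:
output "authconfig" {
  value = yamlencode({
    apiVersion = "v1"
    kind       = "ConfigMap"
    metadata   = { name = "aws-auth", namespace = "kube-system" }
    data = {
      # mapRoles is itself a YAML string inside the ConfigMap, so it is encoded separately
      mapRoles = yamlencode([{
        rolearn  = "arn:aws:iam::503655390180:role/clusters-production-workers"
        username = "system:node:{{EC2PrivateDNSName}}"
        groups   = ["system:bootstrappers", "system:nodes"]
      }])
    }
  })
}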
If you want to continue with what you have now, then try passing the -json or -raw option to terraform output:
terraform output -json authconfig > authconfig.yaml
or
terraform output -raw authconfig > authconfig.yaml
The error message tells you that authconfig.yaml cannot be converted from YAML to JSON, suggesting it's not valid YAML.
The cat authconfig.yaml you're showing us includes the <<EOT and EOT tags. I would suggest removing those before running kubectl create -f.
Your comment suggests you knew this already - then why didn't you ask about Terraform, rather than showing us kubectl create failing? From your post, it really sounded like you copy/pasted the output of your job without even reading it.
So, obviously, the next step is terraform output -raw or -json; there are several mentions in their docs and knowledge base that a Google search would point you to:
https://discuss.hashicorp.com/t/terraform-outputs-with-heredoc-syntax-leaves-eot-in-file/18584/7
https://www.terraform.io/docs/cli/commands/output.html
Lastly, we could ask why. Why terraform output > something at all, when you can have Terraform write the file itself?
And as a general rule, whenever writing Terraform stdout/stderr to files, I strongly suggest using -no-color.
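A sketch of that last idea, using the local_file resource from the hashicorp/local provider (assuming the provider is available, and that the rendered YAML string lives in a hypothetical local value):
# Hypothetical: local.authconfig_yaml holds the yamlencode(...) string shown above
resource "local_file" "authconfig" {
  filename = "${path.module}/authconfig.yaml"
  content  = local.authconfig_yaml
}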

Mix environment variables from Kubernetes deployment and Docker image

I am trying to pass an environment variable in my deployment that should define a prefix based on a version number:
env:
  - name: INDEX_PREFIX
    value: myapp-$(VERSION)
$(VERSION) is not defined in my deployment but is set in the Docker image used by the pod.
I tried both $() and ${}, but VERSION is not interpolated into the environment of my pod. In a shell in my pod, export TEST=myapp-${VERSION} does work, though.
Is there any way to achieve what I am looking for, i.e. setting an environment variable in my deployment that references an environment variable set in the Docker image?
VERSION is an environment variable of the Docker image, so you can assign it a value either inside the container or by passing it in the pod spec:
env:
  - name: VERSION
    value: YOUR-VALUE
In your case, VERSION is set either by a script inside the Docker container or in the Dockerfile.
You can:
- In the Dockerfile, add ENV INDEX_PREFIX myapp-${VERSION}
- Add an export to your entrypoint script: export INDEX_PREFIX=myapp-${VERSION}
In case you can't modify the Dockerfile, you can try to:
- Get the image's entrypoint file (i.e. /IMAGE-entrypoint.sh) and the image args (i.e. IMAGE-ARGS); you can use docker inspect IMAGE.
- Override the container command and args in the pod spec using a script:
command:
  - '/bin/sh'
args:
  - '-c'
  - |
    set -e
    set -x
    # Re-create the variable at container start, where ${VERSION} from the image is visible
    export INDEX_PREFIX=myapp-${VERSION}
    # Then hand off to the image's original entrypoint and args
    IMAGE-entrypoint.sh IMAGE-ARGS
k8s documentation : https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
Hope this helps.
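For completeness, a hedged alternative: if you can duplicate the version value in the manifest itself, Kubernetes' own $(VAR_NAME) expansion works, because both variables are then defined in the pod spec (the version value below is made up):
env:
  - name: VERSION
    value: "1.2.3"            # must be set in the manifest, not only in the image
  - name: INDEX_PREFIX
    value: myapp-$(VERSION)   # expands to myapp-1.2.3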

Kubernetes|Helm values.yaml - How to access array using dynamic index

I have a values.yaml where I need to mention multiple ports like the following:
kafkaClientPort:
  - 32000
  - 32001
  - 32002
In the YAML for the StatefulSet, I need to pick the value by ordinal number.
So for kf-0 I need to use the first element of kafkaClientPort, for kf-1 the second element, and so on.
I am trying the following:
args:
  - "KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://$(MY_NODE_NAME):{{ index .Values.kafkaClientPort ${HOSTNAME##*-} }}"
But it is showing an error.
Please advise on the best way to access values.yaml values dynamically.
The trick here is that the Helm template doesn't know anything about the ordinal in your StatefulSet. If you look at the Kafka Helm chart, you see that they use a base port 31090 and then add the ordinal number, but that substitution happens 'after' the template is rendered, at container runtime. Something like this in your values:
"advertised.listener": |-
  PLAINTEXT://kafka.cluster.local:$((31090 + ${KAFKA_BROKER_ID}))
and then in the template file they use a bash export under command, with a printf (which is an alias for fmt.Sprintf). Something like this in your case:
command:
  - sh
  - -exc
  - |
    unset KAFKA_PORT && \
    export KAFKA_BROKER_ID=${HOSTNAME##*-} && \
    export "KAFKA_ADVERTISED_LISTENERS={{ printf "%s" $advertised.listener }}" \
    ...
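Applied to the original kafkaClientPort list, a hedged sketch of the same runtime trick (the entrypoint path is hypothetical): Helm renders the whole list into the startup script, and the shell picks the element by ordinal.
command:
  - sh
  - -ec
  - |
    # Rendered by Helm at template time into e.g. "32000 32001 32002"
    PORTS="{{ join " " .Values.kafkaClientPort }}"
    # kf-0 -> 0, kf-1 -> 1, ...
    ORDINAL=${HOSTNAME##*-}
    # Pick the (ORDINAL + 1)-th word from PORTS
    PORT=$(echo $PORTS | cut -d' ' -f$((ORDINAL + 1)))
    export KAFKA_ADVERTISED_LISTENERS="PLAINTEXT://${MY_NODE_NAME}:${PORT}"
    exec /path/to/original-entrypoint.sh   # hypothetical: the image's real entrypoint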

Copy file to ansible host with custom variables substituted

I'm working on an Ansible playbook that should help generate build agents for a continuous delivery pipeline. Among other things, I'll need to install an Oracle client on such an agent. I want to do something like:
- name: "Provide response file"
copy: src=/custom.rsp dest=/opt/oracle
Within the custom.rsp file I've got some variables to be substituted. Normally one could do it with a separate shell command like this:
- name: "Substitute Vars"
  shell: "sed 's|<PARAMETER>|<VALUE>|g' -i /opt/oracle/custom.rsp"
I don't like that approach, though; there should be a more convenient way to do this. Can anybody give me a hint?
You want to be using a template rather than copying a static file.
Also, when using the copy or template modules, the dest parameter is a full path AND filename, not just a path. So if you want to end up with a copy of custom.rsp in the directory /opt/oracle then you need to do this:
- name: "Provide response file"
template: src=/custom.rsp dest=/opt/oracle/custom.rsp
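For the response file itself, a minimal sketch (the parameter names are made up): put Jinja2 placeholders in the file and let the template module substitute your variables at copy time.
# custom.rsp, with hypothetical Oracle installer parameters
ORACLE_HOME={{ oracle_home }}
INSTALLATION_TYPE={{ installation_type }}
With oracle_home and installation_type defined in your inventory or play vars, no sed post-processing is needed.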
I'm going to extend Bruce's answer with an example:
This is part of my inventory.yaml:
kafka_stage:
  children:
    kafka_with_zookeeper_stage:
    kafka_only_stage:
  vars:
    zookeeper_hosts: "kafka-stage01:2181,kafka-stage02:2181,kafka-stage03:2181"
kafka_with_zookeeper_stage:
  hosts:
    kafka-stage01:
      broker_id: 0
    kafka-stage02:
      broker_id: 1
  vars:
    services:
      kafka:
      zookeeper:
This is part of a configuration file:
# The id of the broker. This must be set to a unique integer for each broker.
broker.id={{ broker_id }}
# {{ zookeeper_hosts }}
advertised.listeners=PLAINTEXT://{{ ansible_host }}:9092
# {{ services }}
This task in a playbook:
- name: Copy to Host
  ansible.builtin.template:
    src: my_configfile.properties
    dest: /tmp/hejsan.properties
Gave me this on the remote host kafka-stage02:
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1
# kafka-stage01:2181,kafka-stage02:2181,kafka-stage03:2181
advertised.listeners=PLAINTEXT://kafka-stage02:9092
# {'kafka': None, 'zookeeper': None}