Value of Kubernetes secret in environment variable seems incorrect - kubernetes

I'm deploying a test application onto kubernetes on my local computer (minikube) and trying to pass database connection details into a deployment via environment variables.
I'm passing in these details using two methods - a ConfigMap and a Secret. The username (DB_USERNAME) and connection url (DB_URL) are passed via a ConfigMap, while the DB password is passed in as a secret (DB_PASSWORD).
My issue is that while the values passed via the ConfigMap are fine, the DB_PASSWORD from the Secret appears jumbled, as if there were some encoding issue.
My deployment yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        envFrom:
        - configMapRef:
            name: gweb-cm
        - secretRef:
            name: password
My ConfigMap and Secret yaml
apiVersion: v1
data:
  DB_URL: jdbc:mysql://mysql/test?serverTimezone=UTC
  DB_USERNAME: webuser
  SPRING_PROFILES_ACTIVE: prod
  SPRING_DDL_AUTO: create
kind: ConfigMap
metadata:
  name: gweb-cm
---
apiVersion: v1
kind: Secret
metadata:
  name: password
type: Generic
data:
  DB_PASSWORD: test
Not sure if I'm missing something in my Secret definition?

The secret value should be base64 encoded. Instead of test, use the output of
echo -n 'test' | base64
P.S. the Secret's type should be Opaque, not Generic
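For completeness (not in the original answer): dGVzdA== is the output of that command, and the jumbled value most likely comes from the literal string test being treated as base64 and decoded into three non-printable bytes. A corrected Secret could look like this; the stringData variant is equivalent and lets the API server do the encoding for you:
apiVersion: v1
kind: Secret
metadata:
  name: password
type: Opaque
data:
  DB_PASSWORD: dGVzdA==
---
# equivalent: stringData accepts the plain value and stores it base64 encoded
apiVersion: v1
kind: Secret
metadata:
  name: password
type: Opaque
stringData:
  DB_PASSWORD: test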

Related

Kubernetes: set environment variables from file?

My Kubernetes deployment has an initContainer which fetches a token from a URL. My app container (3rd party) then needs that token as an environment variable.
A possible approach would be: the initContainer creates a Kubernetes Secret with the token value; the app container uses the secret as an environment variable via env[].valueFrom.secretKeyRef.
Creating the Secret from the initContainer requires accessing the Kubernetes API from a Pod though, which tends to be a tad cumbersome. For example, directly accessing the REST API requires granting proper permissions to the pod's service account; otherwise, creating the secret will fail with
secrets is forbidden: User "system:serviceaccount:default:default"
cannot create resource "secrets" in API group "" in the namespace "default"
So I was wondering, isn't there any way to just write the token to a file on an emptyDir volume...something like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  labels:
    app: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      initContainers:
      - name: fetch-auth-token
        image: curlimages/curl
        command:
        - /bin/sh
        args:
        - -c
        - |
          echo "Fetching token..."
          url=https://gist.githubusercontent.com/MaxHorstmann/a99823d5aff66fe2ad4e7a4e2a2ee96b/raw/662c19aa96695e52384337bdbd761056bb324e72/token
          curl $url > /auth-token/token
        volumeMounts:
        - mountPath: /auth-token
          name: auth-token
      ...
      volumes:
      - name: auth-token
        emptyDir: {}
... and then somehow use that file to populate an environment variable in the app container, similar to env[].valueFrom.secretKeyRef, along the lines of:
containers:
- name: my-actual-app
  image: thirdpartyappimage
  env:
  - name: token
    valueFrom:
      fileRef:
        path: /auth-token/token
        # ^^^^ this does not exist
  volumeMounts:
  - mountPath: /auth-token
    name: auth-token
Unfortunately, there's no env[].valueFrom.fileRef.
I considered overwriting the app container's command with a shell script which loads the environment variable from the file before launching the main command; however, the container image doesn't even contain a shell.
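For illustration, if the image did ship a shell, that override would look roughly like this (/path/to/app is a hypothetical stand-in for the image's real entrypoint):
containers:
- name: my-actual-app
  image: thirdpartyappimage
  command:
  - /bin/sh
  - -c
  # read the token from the shared volume, export it, then hand off to the real binary
  - export token="$(cat /auth-token/token)" && exec /path/to/app
  volumeMounts:
  - mountPath: /auth-token
    name: auth-token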
Is there any way to set the environment variable in the app container from a file?
Creating the Secret from the initContainer requires accessing the Kubernetes API from a Pod though, which tends to be a tad cumbersome...
It's not actually all that bad; you only need to add a ServiceAccount, Role, and RoleBinding to your deployment manifests.
The ServiceAccount manifest is minimal, and you only need it if you don't want to grant permissions to the default service account in your namespace:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: secretmaker
Then your Role grants access to secrets (we need create and delete permissions, and having get and list is handy for debugging):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app: env-example
  name: secretmaker
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - create
  - get
  - delete
  - list
A RoleBinding connects the ServiceAccount to the Role:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app: env-example
  name: secretmaker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secretmaker
subjects:
- kind: ServiceAccount
  name: secretmaker
  namespace: default
And with those permissions in place, the Deployment is relatively simple:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: env-example
  name: env-example
  namespace: env-example
spec:
  selector:
    matchLabels:
      app: env-example
  template:
    metadata:
      labels:
        app: env-example
    spec:
      serviceAccountName: secretmaker
      initContainers:
      - command:
        - /bin/sh
        - -c
        - |
          echo "Fetching token..."
          url=https://gist.githubusercontent.com/MaxHorstmann/a99823d5aff66fe2ad4e7a4e2a2ee96b/raw/662c19aa96695e52384337bdbd761056bb324e72/token
          curl $url -o /tmp/authtoken
          kubectl delete secret authtoken > /dev/null 2>&1
          kubectl create secret generic authtoken --from-file=AUTH_TOKEN=/tmp/authtoken
        image: docker.io/alpine/k8s:1.25.6
        name: create-auth-token
      containers:
      - name: my-actual-app
        image: docker.io/alpine/k8s:1.25.6
        command:
        - sleep
        - inf
        envFrom:
        - secretRef:
            name: authtoken
The application container here is a no-op that runs sleep inf; that gives you the opportunity to inspect the environment by running:
kubectl exec -it deployment/env-example -- env
Look for the AUTH_TOKEN variable created by our initContainer.
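To double-check that the Secret itself was created (assuming everything runs in the default namespace), you can also decode it directly:
kubectl get secret authtoken -o jsonpath='{.data.AUTH_TOKEN}' | base64 -d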
All the manifests mentioned here can be found in this repository.

Define unique IDs for wso2.carbon: in deployment.yaml || Kubernetes deployment

Requirement:
I need to assign a different id to each pod in deployment.yaml, for the id parameter in the wso2.carbon section, i.e. wso2-am-analytics_1 and wso2-am-analytics_2.
Setup:
This is a Kubernetes deployment; the relevant parts of deployment.yaml:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: wso2apim-mnt-worker-conf
  namespace: wso2
data:
  deployment.yaml: |
    wso2.carbon:
      type: wso2-apim-analytics
      id: wso2-am-analytics
      name: WSO2 API Manager Analytics Server
      ports:
        # port offset
        offset: 0
    .
    .
    .
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wso2apim-mnt-worker
  namespace: wso2
spec:
  replicas: 2
  strategy:
    type: Recreate
  selector:
    matchLabels:
      deployment: wso2apim-mnt-worker
  .
  .
  .
You don't necessarily have to use wso2-am-analytics_1 and wso2-am-analytics_2 as the IDs; the value just has to be unique, so you can use something like the pod IP. If you are strict about the ID value, you can create the ConfigMap from a config file and add some logic to populate the config file's ID field appropriately; if you use Helm, this is pretty easy.
If some other unique value is acceptable, you can do the following. WSO2 configs can read values from environment variables, so you can do something like this:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: wso2apim-mnt-worker-conf
  namespace: wso2
data:
  deployment.yaml: |
    wso2.carbon:
      type: wso2-apim-analytics
      id: ${NODE_ID}
      name: WSO2 API Manager Analytics Server
      ports:
        # port offset
        offset: 0

# Then in the Deployment, pass the environment variable:
env:
- name: NODE_ID
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
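A variation, if an IP-based ID is not desirable: the pod name is also unique per replica, so the same downward-API approach works with metadata.name:
env:
- name: NODE_ID
  valueFrom:
    fieldRef:
      fieldPath: metadata.name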

Kubernetes env variable not attached via PodDefault

I am working with the Kubeflow notebook server. I need to add some configuration in the form of environment variables, so I decided to create a ConfigMap and a PodDefault.
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-configmap
  namespace: app
data:
  PLACE: /auth
  USERNAME: root
  PASSWORD: l3tm3in
This is my ConfigMap file. I have attached it to the PodDefault object using the syntax below:
apiVersion: "kubeflow.org/v1alpha1"
kind: PodDefault
metadata:
name: test-configmap
namespace: app
spec:
selector:
matchLabels:
test-configmap: "true"
desc: "Test Configmap"
envFrom:
- configMapRef:
name: test-configmap
The configuration does show up in the Kubeflow configurations section, but the environment variables are not attached to the notebook (Pod).
Does anyone know how to fix this issue?
Thanks in advance.
I have never used Kubeflow, but based on the source code, this should be the solution:
apiVersion: "kubeflow.org/v1alpha1"
kind: PodDefault
metadata:
name: test-configmap
namespace: app
spec:
selector:
matchLabels:
test-configmap: "true"
desc: "Test Configmap"
containers:
- envFrom:
- configMapRef:
name: test-configmap
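One more thing worth checking (my assumption, based on the selector above, not something from the original answer): a PodDefault is only applied to Pods whose labels match its selector, so the notebook Pod needs the corresponding label, e.g.:
metadata:
  labels:
    test-configmap: "true"
In the notebook creation UI this usually corresponds to selecting the configuration for the notebook server.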

Are multiple imagePullSecrets allowed and used by Kubernetes to pull an image from a private registry?

I have a private registry (gitlab) where my docker images are stored.
For deployment a secret is created that allows GKE to access the registry. The secret is called deploy-secret.
The secret's login information expires after a short time in the registry.
I additionally created a second, permanent secret that allows access to the docker registry, named permanent-secret.
Is it possible to specify the Pod with two secrets? For example:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: deploy-secret
  - name: permanent-secret
Will Kubernetes, when trying to re-pull the image later, recognize that the first secret does not work (does not allow authentication to the private registry) and then fallback successfully to the second secret?
Surprisingly, this works! I just tried it on my cluster: I added a fake registry-credentials secret with the wrong values, put both secrets in my yaml like you did (below), and the pod was created with its container running successfully:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      nodeSelector:
      containers:
      - image: gitlab.myapp.com/my-image:tag
        name: test
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: regcred-test
      - name: regcred
The regcred secret has the correct values and the regcred-test is just a bunch of gibberish. So we can see that it ignores the incorrect secret.
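For reference (not part of the original answer), registry pull secrets like regcred are typically created with kubectl's docker-registry helper; the server and credential values below are placeholders:
kubectl create secret docker-registry regcred \
  --docker-server=gitlab.myapp.com \
  --docker-username=<username> \
  --docker-password=<access-token>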

Kubernetes - Passing in environment variable and service name (from DNS)

I can't seem to find an example of the correct syntax for inserting an environment variable along with the service name:
So I have a service defined as:
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  type: NodePort
  ports:
  - name: http
    port: 3000
    targetPort: 3000
  selector:
    app: test
I then use a secrets file with the following:
apiVersion: v1
kind: Secret
metadata:
  name: test
  labels:
    app: test
data:
  password: fxxxxxxxxxxxxxxx787xx==
And just to confirm I'm using envFrom to set that password as an env variable:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: xxxxxxxxxxx
        imagePullPolicy: Always
        envFrom:
        - configMapRef:
            name: test
        - secretRef:
            name: test
        ports:
        - containerPort: 3000
Now in my config file I want to refer to that password as well as the service name itself - is this the correct way to do so:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
  labels:
    app: test
data:
  WORKING_URI: "http://somedomain:${password}#test"
The YAML configuration does not work the way you provided in your example. If you want to set up Kubernetes with a complex configuration and use variables or dynamic assignment for some of them, you have to use an external parser to replace the variable placeholders. I use bash and sed to accomplish this. I changed your config a bit:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
  labels:
    app: test
data:
  WORKING_URI: "http://somedomain:VAR_PASSWORD#test"
After saving it, I created a simple shell script containing the desired values:
#!/bin/sh
export PASSWORD="verysecretpwd"
cat deploy.yaml | sed "s/VAR_PASSWORD/$PASSWORD/g" | kubectl apply -f -
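An alternative sketch, assuming the gettext envsubst utility is available and the placeholder in the ConfigMap is written as ${PASSWORD} rather than VAR_PASSWORD:
#!/bin/sh
export PASSWORD="verysecretpwd"
# envsubst replaces ${PASSWORD} in the template with the exported value
envsubst < deploy.yaml | kubectl apply -f -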