Secret not loaded as environment variables in k8s - kubernetes

I am learning to use k8s and I have a problem. I have been able to perform several deployments with the same YAML without problems. My problem is that when I mount the Secret as a volume, the mount directory is populated with one file per key, but the keys are not picked up as environment variables.
My Secret:
apiVersion: v1
kind: Secret
metadata:
namespace: insertmendoza
name: authentications-sercret
type: Opaque
data:
DB_USERNAME: aW5zZXJ0bWVuZG96YQ==
DB_PASSWORD: aktOUDlaZHRFTE1tNks1
TOKEN_EXPIRES_IN: ODQ2MDA=
SECRET_KEY: aXRzaXNzZWd1cmU=
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: insertmendoza
name: sarys-authentications
spec:
replicas: 1
selector:
matchLabels:
app: sarys-authentications
template:
metadata:
labels:
app: sarys-authentications
spec:
containers:
- name: sarys-authentications
image: 192.168.88.246:32000/custom:image
imagePullPolicy: Always
resources:
limits:
memory: "500Mi"
cpu: "50m"
ports:
- containerPort: 8000
envFrom:
- configMapRef:
name: authentications-config
volumeMounts:
- name: config-volumen
mountPath: /etc/config/
readOnly: true
- name: secret-volumen
mountPath: /etc/secret/
readOnly: true
volumes:
- name: config-volumen
configMap:
name: authentications-config
- name: secret-volumen
secret:
secretName: authentications-sercret
> microservice@1.0.0 start
> node dist/index.js
{
ENGINE: 'postgres',
NAME: 'insertmendoza',
USER: undefined, <-- not loaded
PASSWORD: undefined, <-- not loaded
HOST: 'db-service',
PORT: '5432'
}
If I add them manually, they are recognized:
env:
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: authentications-sercret
key: DB_USERNAME
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: authentications-sercret
key: DB_PASSWORD
> microservice@1.0.0 start
> node dist/index.js
{
ENGINE: 'postgres',
NAME: 'insertmendoza',
USER: 'insertmendoza', <-- works
PASSWORD: 'jKNP9ZdtELMm6K5', <-- works
HOST: 'db-service',
PORT: '5432'
}
listening queue
listening on *:8000
The secrets do exist in the directory where I mount them:
/etc/secret # ls
DB_PASSWORD DB_USERNAME SECRET_KEY TOKEN_EXPIRES_IN
/etc/secret # cat DB_PASSWORD
jKNP9ZdtELMm6K5/etc/secret #
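For reference, the values under data in a Secret are just base64-encoded strings, so you can confirm they decode to what the app expects (a quick local check, assuming base64 is available):
echo 'aW5zZXJ0bWVuZG96YQ==' | base64 -d   # insertmendoza
echo 'aktOUDlaZHRFTE1tNks1' | base64 -d   # jKNP9ZdtELMm6K5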
EDIT
My quick solution was:
envFrom:
- configMapRef:
name: authentications-config
- secretRef: <<--
name: authentications-sercret <<--
I hope it serves you. Greetings from Argentina, Insert Mendoza.

If I understand the problem correctly, you aren't getting the secrets loaded into the environment. It looks like you're loading them incorrectly; use the envFrom form as documented here.
Using your example it would be:
apiVersion: v1
kind: Secret
metadata:
namespace: insertmendoza
name: authentications-sercret
type: Opaque
data:
DB_USERNAME: aW5zZXJ0bWVuZG96YQ==
DB_PASSWORD: aktOUDlaZHRFTE1tNks1
TOKEN_EXPIRES_IN: ODQ2MDA=
SECRET_KEY: aXRzaXNzZWd1cmU=
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: insertmendoza
name: sarys-authentications
spec:
replicas: 1
selector:
matchLabels:
app: sarys-authentications
template:
metadata:
labels:
app: sarys-authentications
spec:
containers:
- name: sarys-authentications
image: 192.168.88.246:32000/custom:image
imagePullPolicy: Always
resources:
limits:
memory: "500Mi"
cpu: "50m"
ports:
- containerPort: 8000
envFrom:
- configMapRef:
name: authentications-config
- secretRef:
name: authentications-sercret
volumeMounts:
- name: config-volumen
mountPath: /etc/config/
readOnly: true
volumes:
- name: config-volumen
configMap:
name: authentications-config
Note that the Secret volume and its mount were removed and only the secretRef section was added. Those keys should now be exported as environment variables in your pod.
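If you want to double-check from inside the cluster that the keys were exported, something along these lines should work (a sketch; the namespace and deployment names are taken from the example above, and kubectl exec picks one of the deployment's pods):
kubectl -n insertmendoza exec deploy/sarys-authentications -- env | grep -E 'DB_USERNAME|DB_PASSWORD|TOKEN_EXPIRES_IN|SECRET_KEY'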

Configuring Keycloak with an external Postgres database

How do we configure Keycloak to use an external Postgres database (AWS RDS)?
We deployed it in Kubernetes using the Quarkus distro and updated the DB env variables in our deployment.yaml, however it is still using the local H2 database and not Postgres.
For a better understanding, here is the deployment.yaml file we are using:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "5"
kubectl.kubernetes.io/last-applied-configuration: |
creationTimestamp: "2022-06-21T16:47:29Z"
generation: 5
labels:
app: keycloak
name: keycloak
namespace: kc***
resourceVersion: "29233550"
uid: 3634683e-657c-4278-9002-82a3ce64b968
spec:
progressDeadlineSeconds: 600
replicas: 3
revisionHistoryLimit: 10
selector:
matchLabels:
app: keycloak
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: keycloak
spec:
containers:
- args:
- start
- --hostname=kc-test.k8.com
- --https-certificate-file=/opt/pem/cert-pem/cert.pem
- --https-certificate-key-file=/opt/pem/key-pem/key.pem
- --log-level=DEBUG
env:
- name: KEYCLOAK_ADMIN
value: ****
- name: KEYCLOAK_ADMIN_PASSWORD
value: *****
- name: PROXY_ADDRESS_FORWARDING
value: "true"
- name: DB_ADDR
value: jdbc:postgresql://database.c**7irl*****.us-east-1.rds.amazonaws.com/database
- name: DB_DATABASE
value: ****
- name: DB_USER
value: postgres
- name: DB_SCHEMA
value: public
- name: DB_VENDOR
value: POSTGRES
- name: JGROUPS_DISCOVERY_PROTOCOL
value: dns.DNS_PING
- name: JGROUPS_DISCOVERY_PROPERTIES
value: dns_query=keycloak
- name: CACHE_OWNERS_COUNT
value: "2"
- name: CACHE_OWNERS_AUTH_SESSIONS_COUNT
value: "2"
image: quay.io/keycloak/keycloak:17.0.0
imagePullPolicy: IfNotPresent
name: keycloak
ports:
- containerPort: 7600
name: jgroups
protocol: TCP
- containerPort: 8080
name: http
protocol: TCP
- containerPort: 8443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /realms/master
port: 8443
scheme: HTTPS
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 30
resources: {}
securityContext:
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /opt/pem/key-pem
name: key-pem
- mountPath: /opt/pem/cert-pem
name: cert-pem
- mountPath: /opt/keycloak/data
name: keydata
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- configMap:
defaultMode: 420
name: key-pem
name: key-pem
- configMap:
defaultMode: 420
name: cert-pem
name: cert-pem
- emptyDir: {}
name: keydata
status:
availableReplicas: 3
conditions:
- lastTransitionTime: "2022-06-21T18:02:32Z"
lastUpdateTime: "2022-06-21T18:02:32Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2022-06-21T18:01:53Z"
lastUpdateTime: "2022-06-21T18:16:41Z"
message: ReplicaSet "keycloak-5c84476694" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 5
readyReplicas: 3
replicas: 3
updatedReplicas: 3
Is your external DB Secret also in the same namespace?
If yes, you can use the approach below.
The <external postgres (AWS RDS)> secret-name referenced here is a k8s Secret that contains all the details below.
Using this method, the deployment will fetch the details dynamically from the Secret.
env:
- name: DB_DATABASE
valueFrom:
secretKeyRef:
name: database-secret-name
key: dbname
- name: DB_ADDR
valueFrom:
secretKeyRef:
name: database-secret-name
key: host
- name: DB_PORT
valueFrom:
secretKeyRef:
name: database-secret-name
key: port
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: database-secret-name
key: password
- name: DB_USER
valueFrom:
secretKeyRef:
name: database-secret-name
key: user
If your external Postgres Secret is in a different namespace, copy your database Secret to the Keycloak namespace and give it a try.
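One common way to copy a Secret between namespaces is to export it, strip the namespace-specific metadata, and re-apply it (a sketch; assumes jq is installed, and the names in angle brackets are placeholders):
kubectl get secret database-secret-name -n <source-namespace> -o json \
  | jq 'del(.metadata.namespace, .metadata.resourceVersion, .metadata.uid, .metadata.creationTimestamp)' \
  | kubectl apply -n <keycloak-namespace> -f -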
DB_ADDR is the env variable for Keycloak versions 16 and below. Use the docs for your Keycloak version: https://www.keycloak.org/server/all-config
Keycloak 17+ has KC_DB_URL:
db-url
The full database JDBC URL.
If not provided, a default URL is set based on the selected database vendor. For instance, if using 'postgres', the default JDBC URL would be 'jdbc:postgresql://localhost/keycloak'.
CLI: --db-url
Env: KC_DB_URL
Of course, also configure the other env variables properly for your Keycloak version.
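For example, with the Keycloak 17+ (Quarkus) image the database settings could be wired up roughly like this (a sketch using kubectl set env; the namespace and RDS endpoint are placeholders, and in practice the password should come from a Secret via secretKeyRef rather than a literal value):
kubectl -n <keycloak-namespace> set env deployment/keycloak \
  KC_DB=postgres \
  KC_DB_URL='jdbc:postgresql://<rds-endpoint>:5432/<database>' \
  KC_DB_USERNAME=postgres \
  KC_DB_PASSWORD='<password>'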

Azure Secrets Store CSI driver provider throws Error: secret * not found

Below is my app definition that uses the Azure CSI store provider. Unfortunately, this definition throws Error: secret 'my-kv-secrets' not found. Why is that?
SecretProviderClass
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
name: my-app-dev-spc
spec:
provider: azure
secretObjects:
- secretName: my-kv-secrets
type: Opaque
data:
- objectName: DB-HOST
key: DB-HOST
parameters:
keyvaultName: my-kv-name
objects: |
array:
- |
objectName: DB-HOST
objectType: secret
tenantId: "xxxxx-yyyy-zzzz-rrrr-vvvvvvvv"
Pod
apiVersion: v1
kind: Pod
metadata:
labels:
run: debug
name: debug
spec:
containers:
- args:
- sleep
- 1d
name: debug
image: alpine
env:
- name: DB_HOST
valueFrom:
secretKeyRef:
name: my-kv-secrets
key: DB-HOST
volumes:
- name: kv-secrets
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: my-app-dev-spc
nodePublishSecretRef:
name: my-sp-secrets
It turned out that the Secrets Store CSI driver only creates the synced Kubernetes Secret when the CSI volume is actually mounted into a container. So if you forget to specify the volumeMount in your YAML definition, it will not work! Below is the fix.
Pod
apiVersion: v1
kind: Pod
metadata:
labels:
run: debug
name: debug
spec:
containers:
- args:
- sleep
- 1d
name: debug
image: alpine
env:
- name: DB_HOST
valueFrom:
secretKeyRef:
name: my-kv-secrets
key: DB-HOST
volumeMounts:
- name: kv-secrets
mountPath: /mnt/kv_secrets
readOnly: true
volumes:
- name: kv-secrets
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: my-app-dev-spc
nodePublishSecretRef:
name: my-sp-secrets
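To confirm the sync works once the volume mount is in place, you can check both the mounted file and that the synced Kubernetes Secret now exists (names taken from the example above):
kubectl exec debug -- cat /mnt/kv_secrets/DB-HOST
kubectl get secret my-kv-secrets -o yaml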

Failed to connect mongo-express to MongoDB in k8s

I configured MongoDB with a username and password, and deployed MongoDB and mongo-express.
The problem is that I'm getting the following error in the mongo-express logs:
Could not connect to database using connectionString: mongodb://username:password#mongodb://lc-mongodb-service:27017:27017/"
I can see that the connection string contains the 27017 port twice, and also a "mongodb://" in the middle that should not be there.
This is my mongo-express deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: lc-mongo-express
labels:
app: lc-mongo-express
spec:
replicas: 1
selector:
matchLabels:
app: lc-mongo-express
template:
metadata:
labels:
app: lc-mongo-express
spec:
containers:
- name: lc-mongo-express
image: mongo-express
ports:
- containerPort: 8081
env:
- name: ME_CONFIG_MONGODB_SERVER
valueFrom:
configMapKeyRef:
name: lc-configmap
key: DATABASE_URL
- name: ME_CONFIG_MONGODB_ADMINUSERNAME
valueFrom:
secretKeyRef:
name: lc-secret
key: MONGO_ROOT_USERNAME
- name: ME_CONFIG_MONGODB_ADMINPASSWORD
valueFrom:
secretKeyRef:
name: lc-secret
key: MONGO_ROOT_PASSWORD
---
apiVersion: v1
kind: Service
metadata:
name: lc-mongo-express-service
spec:
selector:
app: lc-mongo-express
type: LoadBalancer
ports:
- protocol: TCP
port: 8081
targetPort: 8081
And my MongoDB deployment:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: lc-mongodb-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
volumeMode: Filesystem
storageClassName: gp2
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: lc-mongodb
labels:
app: lc-mongodb
spec:
replicas: 1
serviceName: lc-mongodb-service
selector:
matchLabels:
app: lc-mongodb
template:
metadata:
labels:
app: lc-mongodb
spec:
volumes:
- name: lc-mongodb-storage
persistentVolumeClaim:
claimName: lc-mongodb-pvc
containers:
- name: lc-mongodb
image: "mongo"
ports:
- containerPort: 27017
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: lc-secret
key: MONGO_ROOT_USERNAME
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: lc-secret
key: MONGO_ROOT_PASSWORD
command:
- mongod
- --auth
volumeMounts:
- mountPath: '/data/db'
name: lc-mongodb-storage
---
apiVersion: v1
kind: Service
metadata:
name: lc-mongodb-service
labels:
name: lc-mongodb
spec:
selector:
app: lc-mongodb
ports:
- protocol: TCP
port: 27017
targetPort: 27017
What am I doing wrong?
Your connection string format is wrong.
You should be trying something like:
mongodb://[username:password@]host1[:port1][,...hostN[:portN]][/[defaultauthdb][?options]]
Now suppose you are using Node.js:
const MongoClient = require('mongodb').MongoClient;
const uri = "mongodb+srv://<username>:<password>@<Mongo service Name>/<Database name>?retryWrites=true&w=majority";
const client = new MongoClient(uri, { useNewUrlParser: true });
client.connect(err => {
// creating collection
const collection = client.db("test").collection("devices");
// perform actions on the collection object
client.close();
});
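To sanity-check the credentials and the service name from inside the cluster before wiring up mongo-express, a throwaway client pod can help (a sketch; mongosh ships with recent mongo images, and the username/password are placeholders):
kubectl run mongo-client --rm -it --image=mongo --restart=Never -- \
  mongosh 'mongodb://<username>:<password>@lc-mongodb-service:27017/admin'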
Also, you are missing the DB path args ["--dbpath","/data/db"] in the command while using the PVC and configuring the path:
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: mongo
name: mongo
spec:
replicas: 1
selector:
matchLabels:
app: mongo
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: mongo
spec:
containers:
- image: mongo
name: mongo
args: ["--dbpath","/data/db"]
livenessProbe:
exec:
command:
- mongo
- --disableImplicitSessions
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
readinessProbe:
exec:
command:
- mongo
- --disableImplicitSessions
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: mongo-creds
key: username
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongo-creds
key: password
volumeMounts:
- name: "mongo-data-dir"
mountPath: "/data/db"
volumes:
- name: "mongo-data-dir"
persistentVolumeClaim:
claimName: "pvc"

Kubernetes: Invalid value: " **** ": error converting fieldPath: field label not supported

I am trying to apply the deployment below, but I get the following error:
The Deployment "example" is invalid: spec.template.spec.containers[0].env[0].valueFrom.fieldRef.fieldPath: Invalid value: "spec.template.metadata.annotations.configHash": error converting fieldPath: field label not supported: spec.template.metadata.annotations.configHash
I have tried different ways of accessing the fieldPath like:
spec.template.metadata.annotations['configHash']
spec.template.metadata.['annotations'].['configHash']
spec.template.metadata.['annotations'['configHash']]
Nothing seems to work. Any help will be appreciated.
Kubernetes - 1.16.8-gke.15
apiVersion: apps/v1
kind: Deployment
metadata:
name: ecc-web
labels:
app: ecc
spec:
replicas: 1
selector:
matchLabels:
app: ecc
template:
metadata:
labels:
app: ecc
annotations:
configHash: b6651e50d35182bd8fc2f75a5af4aca79387079860fb953896399a1ad16e317d
spec:
volumes:
- name: opt-ecc-logs
emptyDir: {}
securityContext:
fsGroup: 1000
containers:
- name: ecc-web
image: gcr.io/gke-nonprod/ecc:release-11
envFrom:
- configMapRef:
name: env-config
env:
- name: CONFIG_HASH
valueFrom:
fieldRef:
fieldPath: spec.template.metadata.annotations.configHash
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: ecc-secret
key: mysql_svcacct_ecc_dev_password
ports:
- containerPort: 80
imagePullPolicy: Always
securityContext:
privileged: true
volumeMounts:
- name: opt-ecc-logs
mountPath: /opt/ecc/logs
- name: application-log
image: busybox
command: ["/bin/sh","-c"]
args: ["touch /opt/ecc/logs/application.log;chown -R wsapp:wsapp /opt/ecc/logs/;tail -n+1 -f /opt/ecc/logs/application.log"]
securityContext:
runAsUser: 1000
runAsGroup: 1000
volumeMounts:
- name: opt-ecc-logs
mountPath: /opt/ecc/logs
Just use:
env:
- name: CONFIG_HASH
valueFrom:
fieldRef:
fieldPath: metadata.annotations['configHash']
Instead of spec.template.metadata.annotations.configHash. The fieldRef is resolved against the Pod created from the template, so the path is relative to the Pod's own metadata, and annotation keys are accessed with the ['key'] syntax.
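Once the Deployment is re-applied, you can confirm the annotation made it into the container environment, for example (adjust the deployment name and namespace as needed):
kubectl exec deploy/ecc-web -- printenv CONFIG_HASH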

Kubernetes Helm Chart - Debugging

I'm unable to find good information describing these errors:
[sarah@localhost helm] helm install statefulset --name statefulset --debug
[debug] Created tunnel using local port: '33172'
[debug] SERVER: "localhost:33172"
[debug] Original chart version: ""
[debug] CHART PATH: /home/helm/statefulset/
Error: error validating "": error validating data: [field spec.template for v1beta1.StatefulSetSpec is required, field spec.serviceName for v1beta1.StatefulSetSpec is required, found invalid field containers for v1beta1.StatefulSetSpec]
I'm still new to Helm; I've built two working charts that were similar to this template and didn't have these errors, even though the code isn't much different. I'm thinking there might be some kind of formatting error that I'm not noticing. Either that, or it's due to the different type (the others were Pods, this is StatefulSet).
The YAML file it's referencing is here:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: "{{.Values.PrimaryName}}"
labels:
name: "{{.Values.PrimaryName}}"
app: "{{.Values.PrimaryName}}"
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
annotations:
"helm.sh/created": {{.Release.Time.Seconds | quote }}
spec:
#serviceAccount: "{{.Values.PrimaryName}}-sa"
containers:
- name: {{.Values.ContainerName}}
image: "{{.Values.PostgresImage}}"
ports:
- containerPort: 5432
protocol: TCP
name: postgres
resources:
requests:
cpu: {{default "100m" .Values.Cpu}}
memory: {{default "100M" .Values.Memory}}
env:
- name: PGHOST
value: /tmp
- name: PG_PRIMARY_USER
value: primaryuser
- name: PG_MODE
value: set
- name: PG_PRIMARY_PORT
value: "5432"
- name: PG_PRIMARY_PASSWORD
value: "{{.Values.PrimaryPassword}}"
- name: PG_USER
value: testuser
- name: PG_PASSWORD
value: "{{.Values.UserPassword}}"
- name: PG_DATABASE
value: userdb
- name: PG_ROOT_PASSWORD
value: "{{.Values.RootPassword}}"
volumeMounts:
- name: pgdata
mountPath: "/pgdata"
readOnly: false
volumes:
- name: pgdata
persistentVolumeClaim:
claimName: {{.Values.PVCName}}
Would someone be able to a) point me in the right direction to find out how to implement the required spec.template and spec.serviceName fields, b) help me understand why the field 'containers' is invalid, and/or c) mention any tool that can help debug Helm charts? I've attempted 'helm lint' and the '--debug' flag, but 'helm lint' shows no errors, and the flag output is shown with the errors above.
Also, is it possible the errors are coming from a different file?
StatefulSet objects have a different structure than Pods: the container spec belongs under spec.template.spec, and spec.serviceName and spec.selector are required, which is exactly what the validation error is pointing at. You need to modify your YAML file a little:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: "{{.Values.PrimaryName}}"
labels:
name: "{{.Values.PrimaryName}}"
app: "{{.Values.PrimaryName}}"
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
annotations:
"helm.sh/created": {{.Release.Time.Seconds | quote }}
spec:
selector:
matchLabels:
app: "" # has to match .spec.template.metadata.labels
serviceName: "" # put your serviceName here
replicas: 1 # by default is 1
template:
metadata:
labels:
app: "" # has to match .spec.selector.matchLabels
spec:
terminationGracePeriodSeconds: 10
containers:
- name: {{.Values.ContainerName}}
image: "{{.Values.PostgresImage}}"
ports:
- containerPort: 5432
protocol: TCP
name: postgres
resources:
requests:
cpu: {{default "100m" .Values.Cpu}}
memory: {{default "100M" .Values.Memory}}
env:
- name: PGHOST
value: /tmp
- name: PG_PRIMARY_USER
value: primaryuser
- name: PG_MODE
value: set
- name: PG_PRIMARY_PORT
value: "5432"
- name: PG_PRIMARY_PASSWORD
value: "{{.Values.PrimaryPassword}}"
- name: PG_USER
value: testuser
- name: PG_PASSWORD
value: "{{.Values.UserPassword}}"
- name: PG_DATABASE
value: userdb
- name: PG_ROOT_PASSWORD
value: "{{.Values.RootPassword}}"
volumeMounts:
- name: pgdata
mountPath: "/pgdata"
readOnly: false
volumes:
- name: pgdata
persistentVolumeClaim:
claimName: {{.Values.PVCName}}
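As for debugging tooling (part c of the question): besides helm lint, rendering the chart locally and dry-running the install is usually the quickest way to surface schema errors like the missing spec.template (a sketch using the Helm 2 style syntax from the question; the chart path is a placeholder):
helm lint ./statefulset
helm template ./statefulset
helm install ./statefulset --name statefulset --dry-run --debug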