helm install --name testapp ./testapp
Error: release testapp failed: ReplicationController "registry-creds-via-helm" is invalid: spec.template.spec.containers[0].env[0].valueFrom.secretKeyRef.name: Invalid value: "": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
Can anyone point out what the problem is with the YAML below?
# cat testapp/templates/replicationController-ngn.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: registry-creds-via-helm
  namespace: kube-system
  labels:
    version: v1.6
spec:
  replicas: 1
  selector:
    name: registry-creds-via-helm
    version: v1.9
  template:
    metadata:
      labels:
        name: registry-creds-via-helm
        version: v1.9
    spec:
      containers:
      - image: upmcenterprises/registry-creds:1.9
        name: registry-creds
        imagePullPolicy: Always
        env:
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              key: AWS_ACCESS_KEY_ID
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: registry-creds
              key: AWS_SECRET_ACCESS_KEY
        - name: awsaccount
          valueFrom:
            secretKeyRef:
              name: registry-creds
              key: aws-account
        - name: awsregion
          valueFrom:
            secretKeyRef:
              name: registry-creds
              key: aws-region
        - name: aws-assume-role
          valueFrom:
            secretKeyRef:
              name: registry-creds
              key: aws_assume_role
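A hedged observation on the error above: the API server complains that spec.template.spec.containers[0].env[0].valueFrom.secretKeyRef.name is empty, and env[0] is the AWS_ACCESS_KEY_ID entry, which is the only one whose secretKeyRef has no name: field. Assuming it should read from the same registry-creds secret as the other entries, the likely fix is:

- name: AWS_ACCESS_KEY_ID
  valueFrom:
    secretKeyRef:
      name: registry-creds   # assumed: same secret as the other entries
      key: AWS_ACCESS_KEY_ID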
Below is my app definition that uses the Azure CSI secrets store provider. Unfortunately, this definition throws Error: secret 'my-kv-secrets' not found. Why is that?
SecretProviderClass
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: my-app-dev-spc
spec:
  provider: azure
  secretObjects:
  - secretName: my-kv-secrets
    type: Opaque
    data:
    - objectName: DB-HOST
      key: DB-HOST
  parameters:
    keyvaultName: my-kv-name
    objects: |
      array:
        - |
          objectName: DB-HOST
          objectType: secret
    tenantId: "xxxxx-yyyy-zzzz-rrrr-vvvvvvvv"
Pod
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: debug
  name: debug
spec:
  containers:
  - args:
    - sleep
    - 1d
    name: debug
    image: alpine
    env:
    - name: DB_HOST
      valueFrom:
        secretKeyRef:
          name: my-kv-secrets
          key: DB-HOST
  volumes:
  - name: kv-secrets
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: my-app-dev-spc
      nodePublishSecretRef:
        name: my-sp-secrets
It turned out that the Secrets Store CSI driver works only with volumeMounts: the Kubernetes Secret defined in secretObjects is only created once the volume is actually mounted into a pod. So if you forget to specify the volumeMount in your YAML definition, it will not work! Below is the fix.
Pod
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: debug
  name: debug
spec:
  containers:
  - args:
    - sleep
    - 1d
    name: debug
    image: alpine
    env:
    - name: DB_HOST
      valueFrom:
        secretKeyRef:
          name: my-kv-secrets
          key: DB-HOST
    volumeMounts:
    - name: kv-secrets
      mountPath: /mnt/kv_secrets
      readOnly: true
  volumes:
  - name: kv-secrets
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: my-app-dev-spc
      nodePublishSecretRef:
        name: my-sp-secrets
I deployed my application on Kubernetes but have been getting this error:
**MountVolume.SetUp failed for volume "airflow-volume" : mount failed: mount failed: exit status 32 Mounting command: systemd-run Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/4a3c3d0b-b7e8-49bc-8a78-5a8bdc932eca/volumes/kubernetes.io~glusterfs/airflow-volume --scope -- mount -t glusterfs -o auto_unmount,backup-volfile-servers=10.0.2.107:10.0.2.24,log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/airflow-volume/worker-844c9db787-vprt8-glusterfs.log,log-level=ERROR 10.0.2.107:/airflow /var/lib/kubelet/pods/4a3c3d0b-b7e8-49bc-8a78-5a8bdc932eca/volumes/kubernetes.io~glusterfs/airflow-volume Output: Running scope as unit run-22059.scope. mount: /var/lib/kubelet/pods/4a3c3d0b-b7e8-49bc-8a78-5a8bdc932eca/volumes/kubernetes.io~glusterfs/airflow-volume: unknown filesystem type 'glusterfs'. , the following error information was pulled from the glusterfs log to help diagnose this issue: could not open log file for pod worker-844c9db787-vprt8**
AND
**Unable to attach or mount volumes: unmounted volumes=[airflow-volume], unattached volumes=[airflow-volume default-token-s6pvd]: timed out waiting for the condition**
Any suggestions?
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: airflow
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: web
  template:
    metadata:
      labels:
        app: airflow
        tier: web
    spec:
      imagePullSecrets:
      - name: peeriqregistrykey
      restartPolicy: Always
      containers:
      # Airflow Webserver Container
      - name: web
        image: peeriq/data_availability_service:airflow-metadata-cutover
        volumeMounts:
        - mountPath: /usr/local/airflow
          name: airflow-volume
        envFrom:
        - configMapRef:
            name: airflow-config
        env:
        - name: VAULT_ADDR
          valueFrom:
            secretKeyRef:
              name: vault-credentials
              key: VAULT_ADDR
        - name: VAULT_TOKEN
          valueFrom:
            secretKeyRef:
              name: vault-credentials
              key: VAULT_TOKEN
        - name: DJANGO_AUTH_USER
          valueFrom:
            secretKeyRef:
              name: django-auth
              key: DJANGO_AUTH_USER
        - name: DJANGO_AUTH_PASS
          valueFrom:
            secretKeyRef:
              name: django-auth
              key: DJANGO_AUTH_PASS
        - name: FERNET_KEY
          valueFrom:
            secretKeyRef:
              name: airflow-secrets
              key: FERNET_KEY
        - name: POSTGRES_SERVICE_HOST
          valueFrom:
            secretKeyRef:
              name: rds-postgres
              key: POSTGRES_SERVICE_HOST
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: rds-postgres
              key: POSTGRES_PASSWORD
        ports:
        - name: web
          containerPort: 8080
        args: ["webserver"]
      # Airflow Scheduler Container
      - name: scheduler
        image: peeriq/data_availability_service:airflow-metadata-cutover
        volumeMounts:
        - mountPath: /usr/local/airflow
          name: airflow-volume
        envFrom:
        - configMapRef:
            name: airflow-config
        env:
        - name: AWS_DEFAULT_REGION
          value: us-east-1
        - name: ETL_AWS_ACCOUNT_NUMBER
          valueFrom:
            secretKeyRef:
              name: aws-creds
              key: ETL_AWS_ACCOUNT_NUMBER
        - name: VAULT_ADDR
          valueFrom:
            secretKeyRef:
              name: vault-credentials
              key: VAULT_ADDR
        - name: VAULT_TOKEN
          valueFrom:
            secretKeyRef:
              name: vault-credentials
              key: VAULT_TOKEN
        - name: DJANGO_AUTH_USER
          valueFrom:
            secretKeyRef:
              name: django-auth
              key: DJANGO_AUTH_USER
        - name: DJANGO_AUTH_PASS
          valueFrom:
            secretKeyRef:
              name: django-auth
              key: DJANGO_AUTH_PASS
        - name: FERNET_KEY
          valueFrom:
            secretKeyRef:
              name: airflow-secrets
              key: FERNET_KEY
        - name: POSTGRES_SERVICE_HOST
          valueFrom:
            secretKeyRef:
              name: rds-postgres
              key: POSTGRES_SERVICE_HOST
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: rds-postgres
              key: POSTGRES_PASSWORD
        args: ["scheduler"]
      volumes:
      - name: airflow-volume
        # This GlusterFS volume must already exist.
        glusterfs:
          endpoints: glusterfs-cluster
          path: /airflow
          readOnly: false
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flower
  namespace: airflow
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: flower
  template:
    metadata:
      labels:
        app: airflow
        tier: flower
    spec:
      imagePullSecrets:
      - name: peeriqregistrykey
      restartPolicy: Always
      containers:
      - name: flower
        image: peeriq/data_availability_service:airflow-metadata-cutover
        volumeMounts:
        - mountPath: /usr/local/airflow
          name: airflow-volume
        envFrom:
        - configMapRef:
            name: airflow-config
        env:
        # To prevent the error: ValueError: invalid literal for int() with base 10: 'tcp://10.0.0.83:5555'
        - name: FLOWER_PORT
          value: "5555"
        - name: DJANGO_AUTH_USER
          valueFrom:
            secretKeyRef:
              name: django-auth
              key: DJANGO_AUTH_USER
        - name: DJANGO_AUTH_PASS
          valueFrom:
            secretKeyRef:
              name: django-auth
              key: DJANGO_AUTH_PASS
        - name: POSTGRES_SERVICE_HOST
          valueFrom:
            secretKeyRef:
              name: rds-postgres
              key: POSTGRES_SERVICE_HOST
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: rds-postgres
              key: POSTGRES_PASSWORD
        ports:
        - name: flower
          containerPort: 5555
        args: ["flower"]
      volumes:
      - name: airflow-volume
        # This GlusterFS volume must already exist.
        glusterfs:
          endpoints: glusterfs-cluster
          path: /airflow
          readOnly: false
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
  namespace: airflow
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: worker
  template:
    metadata:
      labels:
        app: airflow
        tier: worker
    spec:
      imagePullSecrets:
      - name: peeriqregistrykey
      restartPolicy: Always
      containers:
      - name: worker
        image: peeriq/data_availability_service:airflow-metadata-cutover
        volumeMounts:
        - mountPath: /usr/local/airflow
          name: airflow-volume
        envFrom:
        - configMapRef:
            name: airflow-config
        env:
        - name: AWS_DEFAULT_REGION
          value: us-east-1
        - name: ETL_AWS_ACCOUNT_NUMBER
          valueFrom:
            secretKeyRef:
              name: aws-creds
              key: ETL_AWS_ACCOUNT_NUMBER
        - name: VAULT_ADDR
          valueFrom:
            secretKeyRef:
              name: vault-credentials
              key: VAULT_ADDR
        - name: VAULT_TOKEN
          valueFrom:
            secretKeyRef:
              name: vault-credentials
              key: VAULT_TOKEN
        - name: DJANGO_AUTH_USER
          valueFrom:
            secretKeyRef:
              name: django-auth
              key: DJANGO_AUTH_USER
        - name: DJANGO_AUTH_PASS
          valueFrom:
            secretKeyRef:
              name: django-auth
              key: DJANGO_AUTH_PASS
        - name: FERNET_KEY
          valueFrom:
            secretKeyRef:
              name: airflow-secrets
              key: FERNET_KEY
        - name: POSTGRES_SERVICE_HOST
          valueFrom:
            secretKeyRef:
              name: rds-postgres
              key: POSTGRES_SERVICE_HOST
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: rds-postgres
              key: POSTGRES_PASSWORD
        args: ["worker"]
      volumes:
      - name: airflow-volume
        # This GlusterFS volume must already exist.
        glusterfs:
          endpoints: glusterfs-cluster
          path: /airflow
          readOnly: false
You must install the glusterfs-fuse package on your Kubernetes nodes; otherwise the kubelet won't be able to mount GlusterFS volumes.
The unknown filesystem type 'glusterfs' part of the message can also mean that something is wrong with your volume definition, or with the storage class if you use one. But this is a guess.
I had the same error, and in my Kubernetes cluster the reason was that the NFS server was unavailable. After starting the NFS server, the problem was solved.
I am trying to run a cron job in Kubernetes that needs to access a database. This is the database YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    component: db
  name: db
spec:
  selector:
    matchLabels:
      component: db
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        component: db
    spec:
      containers:
      - name: db
        image: mysql:5.7
        ports:
        - containerPort: 3306
        args:
        - --transaction-isolation=READ-COMMITTED
        - --binlog-format=ROW
        - --max-connections=1000
        - --bind-address=0.0.0.0
        env:
        - name: MYSQL_DATABASE
          valueFrom:
            secretKeyRef:
              key: MYSQL_DATABASE
              name: db-secrets
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              key: MYSQL_PASSWORD
              name: db-secrets
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              key: MYSQL_ROOT_PASSWORD
              name: db-secrets
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              key: MYSQL_USER
              name: db-secrets
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: db-persistent-storage
      restartPolicy: Always
      volumes:
      - name: db-persistent-storage
        persistentVolumeClaim:
          claimName: db-pvc
And this is the yaml for the cronjob:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cron
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cron
            image: iulbricht/shopware-status-tool:1.0.0
            env:
            - name: USERNAME
              valueFrom:
                secretKeyRef:
                  key: USERNAME
                  name: cron-secrets
            - name: PASSWORD
              valueFrom:
                secretKeyRef:
                  key: PASSWORD
                  name: cron-secrets
            - name: DATABASE_DSN
              valueFrom:
                secretKeyRef:
                  key: DATABASE_DSN
                  name: cron-secrets
            - name: DHL_API_KEY
              valueFrom:
                secretKeyRef:
                  key: DHL_API_KEY
                  name: cron-secrets
            - name: SHOP_API
              valueFrom:
                secretKeyRef:
                  key: SHOP_API
                  name: cron-secrets
          restartPolicy: OnFailure
When the CronJob runs I always get the following message: default addr for network 'db:3306' unknown. The MySQL connection string is as follows: mysql://username:password#db:3306/shopware
I am using Kustomize, and the db and cron resources are in the same namespace.
Can anyone help me find a way to solve this?
Can you please try this connection string:
username:password#tcp(db:3306)/shopware
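For context, that error appears to come from the Go MySQL driver, which expects the host to be wrapped in a network specifier such as tcp(...); without it, db:3306 is parsed as an unknown network name, hence default addr for network 'db:3306' unknown. A minimal sketch of how the corrected DSN might be stored in the cron-secrets Secret (the key name DATABASE_DSN mirrors the CronJob above; the value is only illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: cron-secrets
type: Opaque
stringData:
  # same format as suggested above, with the host wrapped in tcp(...)
  DATABASE_DSN: "username:password#tcp(db:3306)/shopware"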
I have the following sample cm.yml for a ConfigMap with nested, JSON-like data.
apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-cm
data:
  spring: |-
    rabbitmq: |-
     host: "sample.com"
    datasource: |-
     url: "jdbc:postgresql:sampleDb"
I have to set the environment variables spring-rabbitmq-host=sample.com and spring-datasource-url=jdbc:postgresql:sampleDb in the following pod.
apiVersion: v1
kind: Pod
metadata:
  name: pod-sample
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox
    command: [ "/bin/sh", "-c", "env" ]
    env:
    - name: sping-rabbitmq-host
      valueFrom:
        configMapKeyRef:
          name: sample-cm
          key: <what should i specify here?>
    - name: spring-datasource-url
      valueFrom:
        configMapKeyRef:
          name: sample-cm
          key: <what should i specify here?>
Unfortunately it won't be possible to pass values from the ConfigMap you created as separate environment variables, because the whole spring entry is read as a single string.
You can check it using kubectl describe cm sample-cm
Name:         sample-cm
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","data":{"spring":"rabbitmq: |-\n host: \"sample.com\"\ndatasource: |-\n url: \"jdbc:postgresql:sampleDb\""},"kind":"Con...

Data
====
spring:
----
rabbitmq: |-
 host: "sample.com"
datasource: |-
 url: "jdbc:postgresql:sampleDb"

Events:  <none>
A ConfigMap needs flat key-value pairs, so you have to modify it so that the values are represented separately. The simplest approach would be:
apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-cm
data:
  host: "sample.com"
  url: "jdbc:postgresql:sampleDb"
so the values will look like this:
kubectl describe cm sample-cm
Name:         sample-cm
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","data":{"host":"sample.com","url":"jdbc:postgresql:sampleDb"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"s...

Data
====
host:
----
sample.com
url:
----
jdbc:postgresql:sampleDb

Events:  <none>
and pass it to a pod:
apiVersion: v1
kind: Pod
metadata:
  name: pod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox
    command: [ "/bin/sh", "-c", "env" ]
    env:
    - name: sping-rabbitmq-host
      valueFrom:
        configMapKeyRef:
          name: sample-cm
          key: host
    - name: spring-datasource-url
      valueFrom:
        configMapKeyRef:
          name: sample-cm
          key: url
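As a side note, if every key in the ConfigMap should be exposed, envFrom can load the whole ConfigMap in one go instead of listing each key; the variable names then equal the ConfigMap keys (host and url in this case). A brief sketch:

apiVersion: v1
kind: Pod
metadata:
  name: pod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox
    command: [ "/bin/sh", "-c", "env" ]
    # every key in sample-cm becomes an environment variable
    envFrom:
    - configMapRef:
        name: sample-cm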
I have 4 Kubernetes/Helm deployments (web, emailworker, jobworker, sync) which all need to share exactly the same spec.template.spec.containers[].env key. The env block is quite large and I'd like to avoid copy/pasting it into each deployment, e.g.:
# ...
env:
  - name: NODE_ENV
    value: "{{ .Values.node_env }}"
  - name: BASEURL
    value: "{{ .Values.base_url }}"
  - name: REDIS_HOST
    valueFrom:
      secretKeyRef:
        name: secret-redis
        key: host
  - name: KUE_PREFIX
    value: "{{ .Values.kue_prefix }}"
  - name: DATABASE_NAME
    value: "{{ .Values.database_name }}"
  - name: DATABASE_HOST
    valueFrom:
      secretKeyRef:
        name: secret-postgres
        key: host
  - name: DATABASE_USER
    valueFrom:
      secretKeyRef:
        name: secret-postgres
        key: username
  - name: DATABASE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: secret-postgres
        key: password
  - name: AWS_KEY
    valueFrom:
      secretKeyRef:
        name: secret-bucket
        key: key
  - name: AWS_SECRET
    valueFrom:
      secretKeyRef:
        name: secret-bucket
        key: secret
  - name: AWS_S3_BUCKET
    valueFrom:
      secretKeyRef:
        name: secret-bucket
        key: bucket
  - name: AWS_S3_ENDPOINT
    value: "{{ .Values.s3_endpoint }}"
  - name: INSTAGRAM_CLIENT_ID
    valueFrom:
      secretKeyRef:
        name: secret-instagram
        key: clientID
# ...
Is this possible to achieve with either yaml, Helm or Kubernetes?
So I found a solution with Helm named templates: https://github.com/kubernetes/helm/blob/master/docs/chart_template_guide/named_templates.md
I created a file templates/_env.yaml with the following content:
{{ define "env" }}
- name: NODE_ENV
  value: "{{ .Values.node_env }}"
- name: BASEURL
  value: "{{ .Values.base_url }}"
- name: REDIS_HOST
  valueFrom:
    secretKeyRef:
      name: secret-redis
      key: host
- name: KUE_PREFIX
  value: "{{ .Values.kue_prefix }}"
- name: DATABASE_NAME
  value: "{{ .Values.database_name }}"
- name: DATABASE_HOST
  valueFrom:
    secretKeyRef:
      name: secret-postgres
      key: host
- name: DATABASE_USER
  valueFrom:
    secretKeyRef:
      name: secret-postgres
      key: username
- name: DATABASE_PASSWORD
  valueFrom:
    secretKeyRef:
      name: secret-postgres
      key: password
- name: AWS_KEY
  valueFrom:
    secretKeyRef:
      name: secret-bucket
      key: key
- name: AWS_SECRET
  valueFrom:
    secretKeyRef:
      name: secret-bucket
      key: secret
- name: AWS_S3_BUCKET
  valueFrom:
    secretKeyRef:
      name: secret-bucket
      key: bucket
- name: AWS_S3_ENDPOINT
  value: "{{ .Values.s3_endpoint }}"
- name: INSTAGRAM_CLIENT_ID
  valueFrom:
    secretKeyRef:
      name: secret-instagram
      key: clientID
{{ end }}
And here's how I use it in a templates/deployment.yaml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: somedeployment
  # ...
spec:
  template:
    # ...
    metadata:
      name: somedeployment
    spec:
      # ...
      containers:
      - name: container-name
        image: someimage
        # ...
        env:
        {{- template "env" . }}
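One caveat, hedged: template renders the named block exactly as it is defined (here at column zero), so whether the result lines up correctly under env: depends on your chart's nesting. If indentation becomes a problem, Helm's include function returns a string and can be piped through indent/nindent, roughly like this (the indent width is illustrative and depends on how deeply env: is nested):

        env:
          {{- include "env" . | nindent 10 }}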
Have a look at ConfigMap. That allows configuration to be collected together in one resource and used in multiple deployments.
https://kubernetes.io/docs/tasks/configure-pod-container/configmap/
No need to mess around with any templates.
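For completeness, a rough sketch of that approach, with illustrative names and values: the shared non-secret settings live in one ConfigMap, each of the four deployments pulls them in via envFrom, and secret-backed variables stay under env: with secretKeyRef as before.

apiVersion: v1
kind: ConfigMap
metadata:
  name: shared-env   # illustrative name
data:
  NODE_ENV: production
  BASEURL: https://example.com
  KUE_PREFIX: kue
---
# excerpt from each deployment's pod spec
containers:
- name: container-name
  image: someimage
  envFrom:
  - configMapRef:
      name: shared-env
  env:
  # secret-backed values still come from secretKeyRef
  - name: DATABASE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: secret-postgres
        key: password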