How to mount a secret to a Kubernetes StatefulSet

So, looking at the Kubernetes API documentation (https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#statefulsetspec-v1-apps), it appears that I can indeed have a volume, because a StatefulSet uses a PodSpec and the PodSpec does have a volumes field. So I should be able to list the secret and then mount it like in a Deployment, or any other pod.
The problem is that Kubernetes seems to think that volumes are not actually part of the PodSpec for a StatefulSet. Is this right? If so, how do I mount a secret into my StatefulSet?
error: error validating "mysql-stateful-set.yaml": error validating data: ValidationError(StatefulSet.spec.template.spec.containers[0]): unknown field "volumes" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
StatefulSet:
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: database
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql # has to match .spec.template.metadata.labels
  serviceName: "mysql"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: mysql
        image: mysql
        ports:
        - containerPort: 3306
          name: database
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
        - name: mysql
          mountPath: /run/secrets/mysql
        env:
        - name: MYSQL_ROOT_PASSWORD_FILE
          value: /run/secrets/mysql/root-pass
        volumes:
        - name: mysql
          secret:
            secretName: mysql
            items:
            - key: root-pass
              path: root-pass
              mode: 511
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: do-block-storage
      resources:
        requests:
          storage: 10Gi

The volumes field should go inside the template spec, not inside the container (as it is in your template). Refer to https://godoc.org/k8s.io/api/apps/v1#StatefulSetSpec for the exact structure; go to PodTemplateSpec and you will find the volumes field.
The template below should work for you:
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: database
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql # has to match .spec.template.metadata.labels
  serviceName: "mysql"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: mysql
        image: mysql
        ports:
        - containerPort: 3306
          name: database
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
        - name: mysql
          mountPath: /run/secrets/mysql
        env:
        - name: MYSQL_ROOT_PASSWORD_FILE
          value: /run/secrets/mysql/root-pass
      volumes:
      - name: mysql
        secret:
          secretName: mysql
          items:
          - key: root-pass
            path: root-pass
            mode: 511
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: do-block-storage
      resources:
        requests:
          storage: 10Gi
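If you do not already have the mysql Secret, a minimal sketch of a matching manifest could look like this (the base64 value here is only a placeholder for your real root password):
apiVersion: v1
kind: Secret
metadata:
  name: mysql               # matches secretName in the volume above
type: Opaque
data:
  root-pass: cGFzc3dvcmQ=   # placeholder: base64 of "password"; replace with your own value
With this in place, MYSQL_ROOT_PASSWORD_FILE points at the mounted file /run/secrets/mysql/root-pass.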

Related

Traefik returning 404 for local deployment

I'm following this tutorial and making changes as necessary to set up a self-hosted instance of a Ghost blog. I'm new to Kubernetes, and am self-hosting this locally on some Raspberry Pis. I applied all the deployments, services, MySQL, secrets, PVCs etc., and added ghost to /etc/hosts. When I visit ghost/ in the browser, I get a 404 error, even though I'm targeting the service. Here are my YAMLs:
MySQL PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    type: longhorn
    app: example
spec:
  storageClassName: longhorn
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Ghost PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ghost-pv-claim
  labels:
    type: longhorn
    app: ghost
spec:
  storageClassName: longhorn
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
MySQL Password Secret
apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass
type: Opaque
data:
  password: <base_64_encoded_pwd>
Ghost SQL deployment
apiVersion: v1
kind: Service
metadata:
  name: ghost-mysql
  labels:
    app: ghost
spec:
  ports:
  - port: 3306
  selector:
    app: ghost
    tier: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: ghost-mysql
  labels:
    app: ghost
spec:
  selector:
    matchLabels:
      app: ghost
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: ghost
        tier: mysql
    spec:
      containers:
      - image: arm64v8/mysql:latest
        imagePullPolicy: Always
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        - name: MYSQL_USER
          value: ghost
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-vol
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-vol
        persistentVolumeClaim:
          claimName: mysql-pv-claim
Ghost Blog Deployment
apiVersion: v1
kind: Service
metadata:
  name: ghost-svc
  labels:
    app: ghost
    tier: frontend
spec:
  selector:
    app: ghost
    tier: frontend
  ports:
  - protocol: TCP
    port: 2368
    targetPort: 2368
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ghost-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ghost
      tier: frontend
  template:
    metadata:
      labels:
        app: ghost
        tier: frontend
    spec:
      # securityContext:
      #   runAsUser: 1000
      #   runAsGroup: 50
      containers:
      - name: blog
        image: ghost
        imagePullPolicy: Always
        ports:
        - containerPort: 2368
        env:
        # - name: url
        #   value: https://www.myblog.com
        - name: database__client
          value: mysql
        - name: database__connection__host
          value: ghost-mysql
        - name: database__connection__user
          value: root
        - name: database__connection__password
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        - name: database__connection__database
          value: ghost
        volumeMounts:
        - mountPath: /var/lib/ghost/content
          name: ghost-vol
      volumes:
      - name: ghost-vol
        persistentVolumeClaim:
          claimName: ghost-pv-claim
Traefik Ingress
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ghost-ingress
  namespace: default
spec:
  entryPoints:
  - web
  routes:
  - match: Host(`ghost`)
    kind: Rule
    services:
    - name: ghost-svc
      port: 80
I also added ghost to /etc/hosts (Mac).
Not sure what I'm doing wrong, but I imagine it's certs/ingress related. Any ideas?

Kubernetes error while creating mount source path : file exists

After re-deploying my Kubernetes StatefulSet, the pod is now failing due to an error while creating the mount source path:
'/var/lib/kubelet/pods/1559ef17-9c48-401d-9a2f-9962a4a16151/volumes/kubernetes.io~csi/pvc-6b9ac265-d0ec-4564-adb2-1c7b3f6631ca/mount': mkdir /var/lib/kubelet/pods/1559ef17-9c48-401d-9a2f-9962a4a16151/volumes/kubernetes.io~csi/pvc-6b9ac265-d0ec-4564-adb2-1c7b3f6631ca/mount: file exists
I'm assuming this is because the persistent volume/PVC already exists and so it cannot be created, but I thought that was the point of a StatefulSet: the data persists and you can simply mount it again. How should I fix this?
Thanks.
apiVersion: v1
kind: Service
metadata:
  name: foo-service
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
  selector:
    app: foo-app
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: foo-statefulset
  namespace: foo
spec:
  selector:
    matchLabels:
      app: foo-app
  serviceName: foo-app
  replicas: 1
  template:
    metadata:
      labels:
        app: foo-app
    spec:
      serviceAccountName: foo-service-account
      containers:
      - name: foo
        image: blahblah
        imagePullPolicy: Always
        volumeMounts:
        - name: foo-data
          mountPath: "foo"
        - name: stuff
          mountPath: "here"
        - name: config
          mountPath: "somedata"
      volumes:
      - name: stuff
        persistentVolumeClaim:
          claimName: stuff-pvc
      - name: config
        configMap:
          name: myconfig
  volumeClaimTemplates:
  - metadata:
      name: foo-data
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: "foo-storage"
      resources:
        requests:
          storage: 2Gi

Kubernetes stateful MongoDB: what is the right connection string, or how do I connect to the MongoDB of a running StatefulSet instance that also has a Service attached?

Here is my YAML config (the PV, StatefulSet, and Service all get created fine, no issues there). I tried a bunch of solutions for the connection string of Kubernetes Mongo, but none of them worked.
Kubernetes version (minikube): 1.20.1
Storage for config: NFS (working fine, tested)
OS: Linux Mint 20
YAML config:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: auth-pv
spec:
  capacity:
    storage: 250Mi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  nfs:
    path: /nfs/auth
    server: 192.168.10.104
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    role: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  serviceName: mongo
  replicas: 1
  template:
    metadata:
      labels:
        app: mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
    spec:
      storageClassName: manual
      accessModes: ["ReadWriteMany"]
      resources:
        requests:
          storage: 250Mi
I have found a few issues in your configuration. In your Service manifest you use
selector:
  role: mongo
but in your StatefulSet pod template you are using
labels:
  app: mongo
These have to match for the Service to select the pods. One more thing: you should set clusterIP: None to make this a headless Service, which is recommended for StatefulSets if you want to access the db by DNS name.
To all viewers, the working YAML is below.
The right connection string is: mongodb://mongo:27017/<dbname>
apiVersion: v1
kind: PersistentVolume
metadata:
  name: auth-pv
spec:
  capacity:
    storage: 250Mi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  nfs:
    path: /nfs/auth
    server: 192.168.10.104
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    app: mongo
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  serviceName: mongo
  replicas: 1
  template:
    metadata:
      labels:
        app: mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
    spec:
      storageClassName: manual
      accessModes: ["ReadWriteMany"]
      resources:
        requests:
          storage: 250Mi
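As a usage sketch (assuming the default namespace and a throwaway client pod, neither of which is part of the question), the headless Service also gives the StatefulSet pod a stable DNS name such as mongo-0.mongo, so either form of the connection string works:
apiVersion: v1
kind: Pod
metadata:
  name: mongo-client        # hypothetical client pod for testing the connection
spec:
  containers:
  - name: client
    image: mongo            # reuses the mongo image so a shell client is available
    command: ["sleep", "infinity"]
    env:
    - name: MONGO_URL
      value: mongodb://mongo:27017/<dbname>   # via the headless Service name
      # the per-pod stable DNS name also works: mongodb://mongo-0.mongo:27017/<dbname>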

How to mount a ConfigMap as a volume mount in a StatefulSet

I don't see an option to mount a ConfigMap as a volume in a StatefulSet. As per https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#statefulset-v1-apps, only PVCs can be associated with a StatefulSet, and a PVC does not have an option for ConfigMaps.
Here is a minimal example:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example
spec:
  selector:
    matchLabels:
      app: example
  serviceName: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: example
        image: nginx:stable-alpine
        volumeMounts:
        - mountPath: /config
          name: example-config
      volumes:
      - name: example-config
        configMap:
          name: example-configmap
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-configmap
data:
  a: "1"
  b: "2"
In the container, you can find the files a and b under /config, with the contents 1 and 2, respectively.
Some explanation:
You do not need a PVC to mount the ConfigMap as a volume to your pods. PersistentVolumeClaims request persistent storage that you can read from and write to; an example of their usage is a database such as Postgres.
ConfigMaps, on the other hand, are read-only key-value structures stored inside Kubernetes (in its etcd store) that hold the configuration for your application. Their values can be mounted as environment variables or as files, either individually or all together (a sketch of the environment-variable form follows below).
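For the environment-variable form, here is a minimal sketch that reuses example-configmap from above (the pod name example-env and the variable name EXAMPLE_A are just illustrative choices, not part of the question):
apiVersion: v1
kind: Pod
metadata:
  name: example-env
spec:
  containers:
  - name: example
    image: nginx:stable-alpine
    env:
    - name: EXAMPLE_A               # illustrative variable name
      valueFrom:
        configMapKeyRef:
          name: example-configmap   # the ConfigMap defined above
          key: a                    # injects the value "1"
    envFrom:                        # or pull in every key of the ConfigMap at once
    - configMapRef:
        name: example-configmap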
I have done it in this way.
apiVersion: v1
kind: ConfigMap
metadata:
  name: rabbitmq-configmap
  namespace: default
data:
  enabled_plugins: |
    [rabbitmq_management,rabbitmq_shovel,rabbitmq_shovel_management].
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbitmq
  labels:
    component: rabbitmq
spec:
  serviceName: "rabbitmq"
  replicas: 1
  selector:
    matchLabels:
      component: rabbitmq
  template:
    metadata:
      labels:
        component: rabbitmq
    spec:
      initContainers:
      - name: "rabbitmq-config"
        image: busybox:1.32.0
        volumeMounts:
        - name: rabbitmq-config
          mountPath: /tmp/rabbitmq
        - name: rabbitmq-config-rw
          mountPath: /etc/rabbitmq
        command:
        - sh
        - -c
        - cp /tmp/rabbitmq/rabbitmq.conf /etc/rabbitmq/rabbitmq.conf && echo '' >> /etc/rabbitmq/rabbitmq.conf;
          cp /tmp/rabbitmq/enabled_plugins /etc/rabbitmq/enabled_plugins
      volumes:
      - name: rabbitmq-config
        configMap:
          name: rabbitmq-configmap
          optional: false
          items:
          - key: enabled_plugins
            path: "enabled_plugins"
      - name: rabbitmq-config-rw
        emptyDir: {}
      containers:
      - name: rabbitmq
        image: rabbitmq:3.8.5-management
        env:
        - name: RABBITMQ_DEFAULT_USER
          value: "username"
        - name: RABBITMQ_DEFAULT_PASS
          value: "password"
        - name: RABBITMQ_DEFAULT_VHOST
          value: "vhost"
        ports:
        - containerPort: 15672
          name: ui
        - containerPort: 5672
          name: api
        volumeMounts:
        - name: rabbitmq-data-pvc
          mountPath: /var/lib/rabbitmq/mnesia
  volumeClaimTemplates:
  - metadata:
      name: rabbitmq-data-pvc
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  selector:
    component: rabbitmq
  ports:
  - protocol: TCP
    port: 15672
    targetPort: 15672
    name: ui
  - protocol: TCP
    port: 5672
    targetPort: 5672
    name: api
  type: ClusterIP

MongoDB in Kubernetes within GCP

I'm trying to deploy MongoDB on my k8s cluster, as MongoDB is my db of choice. To do that I've got config files (very similar to what I did with Postgres a few weeks ago).
Here's mongo's deployment k8s object:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: panel-admin-mongo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: panel-admin-mongo
  template:
    metadata:
      labels:
        component: panel-admin-mongo
    spec:
      volumes:
      - name: panel-admin-mongo-storage
        persistentVolumeClaim:
          claimName: database-persistent-volume-claim
      containers:
      - name: panel-admin-mongo
        image: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: panel-admin-mongo-storage
          mountPath: /data/db
In order to get into the pod I made a service:
apiVersion: v1
kind: Service
metadata:
  name: panel-admin-mongo-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: panel-admin-mongo
  ports:
  - port: 27017
    targetPort: 27017
And of course I need a PVC as well:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-persistent-volume-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
In order to get to the db from my server I used server deployment object:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: panel-admin-api-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: panel-admin-api
  template:
    metadata:
      labels:
        component: panel-admin-api
    spec:
      containers:
      - name: panel-admin-api
        image: my-image
        ports:
        - containerPort: 3001
        env:
        - name: MONGO_URL
          value: panel-admin-mongo-cluster-ip-service # This is important
      imagePullSecrets:
      - name: gcr-json-key
But for some reason, when I boot up all the containers with the kubectl apply command, my server says:
MongoDB :: connection error: MongoParseError: Invalid connection string
Can I deploy it like that (as it was possible with Postgres)? Or what am I missing here?
Use mongodb:// in front of your panel-admin-mongo-cluster-ip-service
So it should look like this:
mongodb://panel-admin-mongo-cluster-ip-service
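In the Deployment manifest, that means changing only the MONGO_URL entry; a sketch of just the relevant container fragment (everything else stays as above):
      containers:
      - name: panel-admin-api
        image: my-image
        ports:
        - containerPort: 3001
        env:
        - name: MONGO_URL
          # prepend the scheme so the driver can parse the connection string
          value: mongodb://panel-admin-mongo-cluster-ip-service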