I am trying to deploy a PetSet (StatefulSet) similar to the example given on this page: http://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/
The full YAML:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: gcr.io/google_containers/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
      annotations:
        volume.alpha.kubernetes.io/storage-class: anything
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
But I need the pods to go to specific nodes only. I have already labeled the nodes, like so:
kubectl label nodes 10.XX.XX.XX node-type=nginx-0
How do I specify a nodeSelector in the above YAML?
Add it to the pod template's spec, at the same level as containers:
spec:
  containers:
  - name: nginx
    image: gcr.io/google_containers/nginx-slim:0.8
    ports:
    - containerPort: 80
      name: web
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html
  nodeSelector:
    node-type: nginx-0
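You can confirm the label is in place with kubectl get nodes -l node-type=nginx-0 (using the label from the question); if no node carries the label, the pods will stay in Pending.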
I have a StatefulSet with a volume that uses subPath: $(POD_NAME). I've also tried $HOSTNAME, which doesn't work either. How does one set the subPath of a volumeMount to the name of the pod or to $HOSTNAME?
Here's what I have:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ravendb
  namespace: pltfrmd
  labels:
    app: ravendb
spec:
  serviceName: ravendb
  template:
    metadata:
      labels:
        app: ravendb
    spec:
      containers:
      - command:
        # ["/bin/sh", "-ec", "while :; do echo '.'; sleep 6 ; done"]
        - /bin/sh
        - -c
        - /opt/RavenDB/Server/Raven.Server --log-to-console --config-path /configuration/settings.json
        image: ravendb/ravendb:latest
        imagePullPolicy: Always
        name: ravendb
        env:
        - name: POD_HOST_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: RAVEN_Logs_Mode
          value: Information
        ports:
        - containerPort: 8080
          name: http-api
          protocol: TCP
        - containerPort: 38888
          name: tcp-server
          protocol: TCP
        - containerPort: 161
          name: snmp
          protocol: TCP
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /data
          name: data
          subPath: $(POD_NAME)
        - mountPath: /configuration
          name: configuration
          subPath: ravendb
        - mountPath: /certificates
          name: certificates
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      terminationGracePeriodSeconds: 120
      volumes:
      - name: certificates
        secret:
          secretName: ravendb-certificate
      - name: configuration
        persistentVolumeClaim:
          claimName: configuration
      - name: data
        persistentVolumeClaim:
          claimName: ravendb
And the Persistent Volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: pltfrmd
  name: ravendb
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 30Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /volumes/ravendb
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: pltfrmd
  name: ravendb
spec:
  storageClassName: local-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
$HOSTNAME used to work, but it doesn't anymore for some reason. I'm wondering if it's a bug in the hostPath storage provisioner?
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.10
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx
          mountPath: /usr/share/nginx/html
          subPath: $(POD_NAME)
      volumes:
      - name: nginx
        configMap:
          name: nginx
OK, so after much experimentation I found a way that still works.
Step one: map an environment variable:
env:
- name: POD_HOST_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
This creates $(POD_HOST_NAME) based on the field metadata.name
Then in your mount you do this:
volumeMounts:
- mountPath: /data
  name: data
  subPathExpr: $(POD_HOST_NAME)
It's important to use subPathExpr, as subPath (which worked before) no longer works. The mount will then use the environment variable you created and expand it properly.
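Putting the two pieces together, a minimal sketch of the container spec (using the RavenDB container from the question; the variable name POD_HOST_NAME is arbitrary) would look like this:

containers:
- name: ravendb
  image: ravendb/ravendb:latest
  env:
  - name: POD_HOST_NAME            # arbitrary name, filled in with the pod's own name
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  volumeMounts:
  - mountPath: /data
    name: data
    subPathExpr: $(POD_HOST_NAME)  # expands per pod, e.g. ravendb-0, ravendb-1

Note that subPathExpr needs a reasonably recent cluster (it became GA around Kubernetes 1.17).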
After re-deploying my Kubernetes StatefulSet, the pod is now failing due to an error while creating the mount source path:
'/var/lib/kubelet/pods/1559ef17-9c48-401d-9a2f-9962a4a16151/volumes/kubernetes.io~csi/pvc-6b9ac265-d0ec-4564-adb2-1c7b3f6631ca/mount': mkdir /var/lib/kubelet/pods/1559ef17-9c48-401d-9a2f-9962a4a16151/volumes/kubernetes.io~csi/pvc-6b9ac265-d0ec-4564-adb2-1c7b3f6631ca/mount: file exists
I'm assuming this is because the persistent volume/PVC already exists and so it cannot be created, but I thought that was the point of the StatefulSet: that the data would persist and you could just mount it again? How should I fix this?
Thanks.
apiVersion: v1
kind: Service
metadata:
  name: foo-service
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
  selector:
    app: foo-app
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: foo-statefulset
  namespace: foo
spec:
  selector:
    matchLabels:
      app: foo-app
  serviceName: foo-app
  replicas: 1
  template:
    metadata:
      labels:
        app: foo-app
    spec:
      serviceAccountName: foo-service-account
      containers:
      - name: foo
        image: blahblah
        imagePullPolicy: Always
        volumeMounts:
        - name: foo-data
          mountPath: "foo"
        - name: stuff
          mountPath: "here"
        - name: config
          mountPath: "somedata"
      volumes:
      - name: stuff
        persistentVolumeClaim:
          claimName: stuff-pvc
      - name: config
        configMap:
          name: myconfig
  volumeClaimTemplates:
  - metadata:
      name: foo-data
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: "foo-storage"
      resources:
        requests:
          storage: 2Gi
I need to create a single pod with multiple containers for MySQL, MongoDB, and MSSQL. My question is: do I need to create a PersistentVolume and PersistentVolumeClaim for each container and specify the volume in the pod configuration, or is a single PV & PVC enough for all the containers in a single pod, as in the configs below?
Could you verify whether the configuration below is enough or not?
PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypod-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypod-pvc
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mypod
  labels:
    app: mypod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mypod
  template:
    metadata:
      labels:
        app: mypod
    spec:
      volumes:
      - name: task-pv-storage
        persistentVolumeClaim:
          claimName: mypod-pvc
      containers:
      - name: mysql
        image: mysql/mysql-server:latest
        ports:
        - containerPort: 3306
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: "/var/lib/mysql"
          name: task-pv-storage
      - name: mongodb
        image: openshift/mongodb-24-centos7
        ports:
        - containerPort: 27017
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: "/var/lib/mongodb"
          name: task-pv-storage
      - name: mssql
        image: mcr.microsoft.com/mssql/server
        ports:
        - containerPort: 1433
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: "/var/opt/mssql"
          name: task-pv-storage
      imagePullSecrets:
      - name: devplat
You should not be running multiple database containers inside a single pod.
Consider running each database in a separate StatefulSet.
Follow the reference below for MySQL:
https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/
You need to adopt a similar approach for MongoDB and the other databases as well.
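As an illustration of that layout, here is a minimal sketch of one database (MongoDB) as its own StatefulSet with its own headless Service and volumeClaimTemplate. The names and storage class are placeholders, the image is the one from the question, and MySQL/MSSQL would each get an analogous manifest:

apiVersion: v1
kind: Service
metadata:
  name: mongodb            # headless service for the StatefulSet (placeholder name)
spec:
  clusterIP: None
  selector:
    app: mongodb
  ports:
  - port: 27017
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  serviceName: mongodb
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: openshift/mongodb-24-centos7
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongodb-data
          mountPath: /var/lib/mongodb
  volumeClaimTemplates:    # each replica gets its own PVC instead of sharing one
  - metadata:
      name: mongodb-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: manual   # placeholder; use whatever storage class you have
      resources:
        requests:
          storage: 3Gi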
I am trying to create a StatefulSet. I want to create a file on the attached volume, so I am using the command touch /data/test.txt, but it seems the container crashes because of that. Why would it do that? If I don't use the command, everything works fine. What are the properties of the /data directory mounted to the volume, like read/write permissions?
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /data
        args:
        - /bin/sh
        - -c
        - touch /data/test.txt
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
That's because the default ENTRYPOINT of k8s.gcr.io/nginx-slim:0.8 is something like starting nginx, and setting args on its own only replaces the image's CMD, not its ENTRYPOINT.
So, if you want to override what the image runs, you need to set command as well:
command: ["/bin/sh","-c"]
args:
- |
  touch /data/test.txt
You can also use kubectl describe or kubectl logs to see what's wrong with your pod/deployment.
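In the context of the StatefulSet from the question, the container spec would look roughly like the sketch below. Keep in mind that a container whose only command is touch exits as soon as it finishes, so with the default restartPolicy: Always the pod will keep restarting; chaining an nginx start after the touch (an assumption on my part, last line) is one way to keep it running:

containers:
- name: nginx
  image: k8s.gcr.io/nginx-slim:0.8
  command: ["/bin/sh", "-c"]
  args:
  - |
    touch /data/test.txt
    exec nginx -g 'daemon off;'   # assumption: keep nginx in the foreground so the container stays up
  ports:
  - containerPort: 80
    name: web
  volumeMounts:
  - name: www
    mountPath: /data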
I am trying to deploy a simple nginx in Kubernetes using host volumes. I use the following YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: webserver
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: hostvol
          mountPath: /usr/share/nginx/html
    volumes:
    - name: hostvol
      hostPath:
        path: /home/docker/vol
When I deploy it with kubectl create -f webserver.yaml, it throws the following error:
error: error validating "webserver.yaml": error validating data: ValidationError(Deployment.spec.template): unknown field "volumes" in io.k8s.api.core.v1.PodTemplateSpec; if you choose to ignore these errors, turn validation off with --validate=false
I believe you have the wrong indentation. The volumes key should be at the same level as containers.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: webserver
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: hostvol
          mountPath: /usr/share/nginx/html
      volumes:
      - name: hostvol
        hostPath:
          path: /home/docker/vol
Look at this WordPress example from the documentation to see how it's done.