How to update a storageClassName in a statefulset? - kubernetes

I am new to k8s and trying to update the storageClassName in a StatefulSet (from default to default-t1; that is the only change in the YAML).
I tried running kubectl apply -f test.yaml
The only difference between the first and second YAML (the one used to apply the update) is storageClassName: default-t1 instead of default:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  podManagementPolicy: "Parallel"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: default
      resources:
        requests:
          storage: 1Gi
Every time I try to update it I get: The StatefulSet "web" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden
What am I missing or what steps should I take to do this?

There's no easy way as far as I know, but it's possible.
Save your current StatefulSet configuration to a YAML file:
kubectl get statefulset some-statefulset -o yaml > statefulset.yaml
Change the storageClassName in the volumeClaimTemplates section of the StatefulSet configuration saved in the YAML file.
Delete the StatefulSet without removing its Pods using:
kubectl delete statefulset some-statefulset --cascade=orphan
Recreate the StatefulSet with the changed StorageClass:
kubectl apply -f statefulset.yaml
For each Pod in the StatefulSet, first delete its PVC (it will get stuck in the Terminating state until you delete the Pod) and then delete the Pod itself.
After each Pod is deleted, the StatefulSet will recreate it, and since its PVC no longer exists, it will also create a new PVC for that Pod using the changed StorageClass defined in the StatefulSet.
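For the example StatefulSet above, that per-Pod step would look roughly like this; PVC names follow the <template-name>-<statefulset-name>-<ordinal> pattern, so here they are www-web-0 and www-web-1:
kubectl delete pvc www-web-0 --wait=false   # PVC stays Terminating while the Pod still uses it
kubectl delete pod web-0                    # lets the PVC finish deleting; the controller recreates Pod and PVC
kubectl get pods -w                         # wait for web-0 to be Running again, then repeat for web-1 / www-web-1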

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  podManagementPolicy: "Parallel"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: <storage class name>
      resources:
        requests:
          storage: 1Gi
Add the storage class name in the file above, then apply it with:
kubectl apply -f <file-name>.yaml

The first line of a StatefulSet should define an apiVersion; you can see an example in the StatefulSet documentation. Please add it as the first line:
apiVersion: apps/v1
Could you show the output of your 'www' PVC? For a StatefulSet the claims are named <template>-<statefulset>-<ordinal>, so in this case:
kubectl get pvc www-web-0 -o yaml
The PVC has a storageClassName field, which should be set to the StorageClass you want to use, so in your case it would be:
storageClassName: default-t1
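To verify which class the existing claims are using before and after the change, a quick check (the claim name assumes the default <template>-<statefulset>-<ordinal> naming):
kubectl get pvc
kubectl get pvc www-web-0 -o jsonpath='{.spec.storageClassName}'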

Related

Having trouble deploying database to kubernetes cluster

I am able to deploy the database service itself, but when I try to deploy with a persistent volume claim as well, the deployment silently fails. Below is the deployment.yaml file I am using. The service deploys fine if I remove the first 14 lines that define the persistent volume claim.
apiVersion: apps/v1
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: timescale-pvc-1
  namespace: my-namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: standard
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: timescale
spec:
  selector:
    matchLabels:
      app: timescale
  replicas: 1
  template:
    metadata:
      labels:
        app: timescale
    spec:
      containers:
        - name: timescale
          image: timescale/timescaledb:2.3.0-pg11
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_PASSWORD
              value: "password"
            - name: POSTGRES_DB
              value: "metrics"
      volumes:
        - name: timescaledb-pv
          persistentVolumeClaim:
            claimName: timescale-pvc-1
Consider StatefulSet for running stateful apps like databases. Deployment is preferred for stateless services.
You are using the below storage class in the PVC.
storageClassName: standard
Ensure the storage class supports dynamic storage provisioning.
Are you creating a PV along with the PVC and Deployment? A Deployment, StatefulSet, or Pod can only use a PVC if there is a PV available, or if the StorageClass can provision one dynamically.
If you're creating the PV as well, then there's a possibility of a different issue. Please share the logs of your Deployment and PVC.
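A few commands that usually surface the reason for a silent failure like this (resource names taken from the manifest above):
kubectl get storageclass                               # is "standard" present and backed by a dynamic provisioner?
kubectl describe pvc timescale-pvc-1 -n my-namespace   # the Events section shows why the claim is not bound
kubectl describe deployment timescale                  # conditions and events for the Deployment
kubectl get events -n my-namespace --sort-by=.metadata.creationTimestamp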

How to sync a folder in kubernetes statefulset

We are trying to create a solution where changes made to a folder inside any of the Pods of the StatefulSet are replicated to the others: any file change inside that folder on any Pod should also be reflected in the other Pods. Is there a sidecar solution for this requirement? We know that a StatefulSet will create separate PVs for each Pod, so there won't be any common mount across the Pods of the StatefulSet.
You can try using NFS or a managed file system like EFS; with those you will be able to use the ReadWriteMany access mode.
For reference, here is an Azure Files example:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: statefulset-azurefile
  labels:
    k8s-app: nginx
    version: v1
spec:
  serviceName: statefulset-azurefile
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: nginx
        version: v1
    spec:
      containers:
      - name: statefulset-azurefile
        image: nginx
        volumeMounts:
        - name: persistent-storage
          mountPath: /mnt/azurefile
  volumeClaimTemplates:
  - metadata:
      name: persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: azurefile
    spec:
      accessModes: [ "ReadWriteMany" ]
      resources:
        requests:
          storage: 5Gi
Demo: https://github.com/andyzhangx/demo/tree/master/linux/statefulset
If volumeClaimTemplates do not work as expected, use a standalone persistentVolumeClaim instead, along the lines of the sketch below.
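A minimal sketch of that alternative, assuming an AKS cluster where the built-in azurefile StorageClass is available; the claim name shared-azurefile is made up for illustration:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-azurefile        # hypothetical name
spec:
  accessModes:
    - ReadWriteMany             # all Pods can mount the same share read-write
  storageClassName: azurefile
  resources:
    requests:
      storage: 5Gi
# In the StatefulSet Pod spec, reference the claim instead of a volumeClaimTemplate:
#   volumes:
#     - name: persistent-storage
#       persistentVolumeClaim:
#         claimName: shared-azurefile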
Article to read about the ReadWriteMany access mode: https://learn.microsoft.com/en-us/azure/aks/azure-files-volume
Other cloud providers such as GCP, AWS, and Oracle (OCI) offer similar file services:
GCP- Filestore
AWS- EFS
OCI- Filestorage
OCI article if you want to explore: https://enabling-cloud.github.io/oci-learning/manual/StaticPersistentVolumeOnOCI.html

Kubernetes PersistentVolume issues in GCP (Deployment stuck in Pending)

Hi, I'm trying to create a persistent volume for my MongoDB on Kubernetes on Google Cloud Platform, and it is stuck in Pending.
Here is my manifest:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: account-mongo-depl
  labels:
    app: account-mongo
spec:
  replicas: 1
  serviceName: 'account-mongo'
  selector:
    matchLabels:
      app: account-mongo
  template:
    metadata:
      labels:
        app: account-mongo
    spec:
      volumes:
        - name: account-mongo-storage
          persistentVolumeClaim:
            claimName: account-db-bs-claim
      containers:
        - name: account-mongo
          image: mongo
          volumeMounts:
            - mountPath: '/data/db'
              name: account-mongo-storage
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: account-db-bs-claim
spec:
  storageClassName: do-block-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: account-mongo-srv
spec:
  type: ClusterIP
  selector:
    app: account-mongo
  ports:
    - name: account-db
      protocol: TCP
      port: 27017
      targetPort: 27017
Here is my list of pods.
My pods are stuck in Pending until they fail.
If you're on GKE, you should already have dynamic provisioning set up.
storageClassName: do-block-storage
accessModes:
  - ReadWriteOnce
"do-block-storage"? Were you running on DigitalOcean previously? You should be able to remove the storageClassName line and use the default provisioner Google provides.
For example, here's a snippet from one of my own statefulsets
volumeClaimTemplates:
- metadata:
    name: data
  spec:
    accessModes: [ "ReadWriteOnce" ]
    ## Specify storage class to use nfs (requires nfs provisioner. Leave unset for dynamic provisioning)
    #storageClassName: "nfs"
    resources:
      requests:
        storage: 10G
I do not specify a storage class here, so it uses the default one, which provisions a persistent disk on GCP.
$ kubectl get storageclass
NAME                 PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   kubernetes.io/gce-pd   Delete          Immediate           false                  2y245d
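If the claim stays in Pending, the events on the PVC usually say why (using the claim name from the manifest above):
kubectl describe pvc account-db-bs-claim   # check the Events section
kubectl get events --field-selector involvedObject.name=account-db-bs-claim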

StatefulSet: longer rolling update leads to version mismatch

The application is deployed on K8s using a StatefulSet because it is stateful in nature. Around 250+ pods are running, and HPA has been implemented on it too, which can scale up to 400 pods.
When a new deployment occurs, it takes a long time (~10-15 min) to update all pods in a RollingUpdate fashion.
Problem: end users get responses from 2 versions of pods until all pods are replaced with the new revision.
I have been googling for an architecture where the overall deployment time can be reduced; the best solution I am finding is a BLUE/GREEN strategy, but that has a bunch of impact on integrated services like monitoring, logging, telemetry etc. because of the 2 naming conventions.
Ideally I am looking for something like maxSurge for Deployments, where new pods are created first and traffic is then shifted to them, but a StatefulSet does not support maxSurge with the RollingUpdate strategy; the controller will delete and recreate each Pod in the StatefulSet based on ordinal index, from largest to smallest.
The solution is to do a partitioned rolling update along with a canary deployment.
Let’s suppose we have the statefulset workload defined by the following yaml file:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
    version: "1.20"
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
    version: "1.20"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # Label selector that determines which Pods belong to the StatefulSet
                 # Must match spec: template: metadata: labels
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx # Pod template's label selector
        version: "1.20"
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx:1.20
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
You could patch the StatefulSet to create a partition and then change the image and version label; only the Pods at or above the partition ordinal are updated. (In this case, since there are only 3 Pods, the last one, web-2, will be the one that changes its image.)
$ kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
$ kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"nginx:1.21"}]'
$ kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/metadata/labels/version", "value":"1.21"}]'
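To confirm that only the Pod at or above the partition picked up the new image, something like this can be used:
$ kubectl get pods -l app=nginx -o custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[0].image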
At this point, you have a pod with the new image and version label ready to use, but since the version label is different, the traffic is still going to the other two pods. If you change the version in the yaml file and apply the new configuration, the rollout will be transparent, since there is already a pod ready to migrate the traffic:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
    version: "1.21"
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
    version: "1.21"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # Label selector that determines which Pods belong to the StatefulSet
                 # Must match spec: template: metadata: labels
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx # Pod template's label selector
        version: "1.21"
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
$ kubectl apply -f file-name.yaml
Once traffic is migrated to the pod containing the new image and version label, you should patch the StatefulSet again and remove the partition with the command kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":0}}}}'
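After the partition is removed, the remaining Pods roll one by one; the progress can be watched with:
$ kubectl rollout status statefulset/web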
Note: You will need to be very careful with the size of the partition, since the remaining pods will handle the whole traffic for some time.

How to create a PV and PVC which is accessible by multiple pods and cronjobs in Kubernetes on a local setup

I am trying to create a PV and PVC that are accessible by both a pod and a cronjob on the same node.
Currently I am using the following yaml file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wwwroot
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "wwwroot"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: wwwroot
  name: wwwroot
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
status: {}
In the StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: web
  name: web
spec:
  selector:
    matchLabels:
      app: web
  serviceName: web
  replicas: 1
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: xy:latest
          ports:
            - containerPort: 80
            - containerPort: 443
          volumeMounts:
            - mountPath: /app/wwwroot
              name: wwwroot
              subPath: tempwwwroot
      imagePullSecrets:
        - name: xy
      restartPolicy: Always
      serviceAccountName: ""
  volumeClaimTemplates:
    - metadata:
        name: wwwroot
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 100Gi
This works fine with the pods, but I have a cronjob that needs to access the same wwwroot PV.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  labels:
    app: import
  name: import
spec:
  schedule: "09 16 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: import
              image: xy:latest
              imagePullPolicy: Always
              volumeMounts:
                - mountPath: /app/wwwroot
                  name: wwwroot
                  subPath: tempwwwroot
          imagePullSecrets:
            - name: xy
          restartPolicy: OnFailure
          volumes:
            - name: wwwroot
              persistentVolumeClaim:
                claimName: wwwroot
To the best of my knowledge, you cannot bind these PVs to a folder on the host machine like in Docker (which would be nice, but I can live with that). When I run kubectl get pvc with the following setup, it seems to use the same PVC name wwwroot, but clearly it does not. I have exec'd /bin/bash into the pod created by the cronjob, and the wwwroot folder does not contain the files from the web pod's wwwroot folder.
Any idea what I am doing wrong? Also, what is the best option to create PV/PVCs on a local single-node Kubernetes setup? I was also thinking about creating an NFS share on the host for easier access to the files.
Also, when I first created the PV/PVC and deployed the web StatefulSet, the web pod should have populated the wwwroot PV with CSS and JS files, but it seems like when it bound to the PV, it deleted everything from that folder and I had to manually copy those files again. Any workaround for that?
You are mounting the PV on two Pods, but you created the PV with accessModes: ReadWriteOnce, so it is not meant to be shared.
You can check the access modes here:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
Also, instead of local storage you can use an NFS StorageClass volume, which can give you ReadWriteMany. I am not sure the local storage class provides it, as it is not in the official documents. Check below:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/
P.S. Never use a local storage class in an actual deployment, as nodes in k8s come and go.
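A rough sketch of that NFS-backed option, assuming an NFS server is already reachable at a made-up address (10.0.0.10, exporting /srv/wwwroot); both the web Pod and the CronJob could then mount the same shared-wwwroot claim through volumes/persistentVolumeClaim instead of a volumeClaimTemplate:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wwwroot-nfs
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany            # many pods/nodes may mount it read-write
  nfs:
    server: 10.0.0.10          # hypothetical NFS server
    path: /srv/wwwroot         # hypothetical export
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-wwwroot        # hypothetical claim name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""        # bind to the pre-created PV, skip dynamic provisioning
  resources:
    requests:
      storage: 100Gi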