Kubernetes: increase storage for a pod

My Jenkins Nexus pod has run out of disk space and I need to increase the persistent volume claim.
I can see the YAML for this in the Kubernetes dashboard, however when I try to change it I get - PersistentVolumeClaim "jenkins-x-nexus" is invalid: spec: Forbidden: field is immutable after creation
Deleting the pod and quickly trying to update the YAML doesn't work either.
Our version of Kubernetes (1.8) doesn't have kubectl stop, so is there a way to stop the replication controller in order to change the YAML?

Our version of Kubernetes (1.8) doesn't have kubectl stop, so is there a way to stop the replication controller in order to change the YAML?
You can scale the RC down to 0, and it will stop spawning pods.
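For example, a minimal sketch, assuming the replication controller is named jenkins-x-nexus (check the actual name with kubectl get rc):
# Scale the replication controller down to 0 so it stops spawning pods
kubectl scale rc jenkins-x-nexus --replicas=0
# Scale it back up once the PVC work is done
kubectl scale rc jenkins-x-nexus --replicas=1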
I can see the YAML for this in the Kubernetes dashboard, however when I try to change it I get - PersistentVolumeClaim "jenkins-x-nexus" is invalid: spec: Forbidden: field is immutable after creation
That message means that you cannot change the size of your volume in place. There are several tickets on GitHub about that limitation for different types of volumes.
So, to change the size, you need to create a new, bigger PVC and migrate your data from the old volume to the new one.
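A minimal sketch of such a replacement PVC; the name, size and storage class below are placeholders, not values from the question:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-x-nexus-new   # hypothetical name for the bigger claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi           # new, larger size
  storageClassName: standard  # assumption - use your cluster's storage class
You would then point the workload at the new claim and copy the data across, for example from a temporary pod that mounts both volumes.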

Related

How can I expand a PVC for Cassandra on AKS without losing data?

I need to start by saying that I have no experience using Cassandra and I am not the one who created this deployment.
I have Cassandra running in a cluster in AKS. The PVC as configured in the statefulset is 1000Gi. Currently the pods are out of storage and are in a constant unhealthy state.
I am looking to expand the volumes attached to the pods. The problem I am facing is that I cannot scale down the statefulset because the statefulsets only scale down when all their pods are healthy.
I even tried deleting the statefulset and then recreating it with a larger PVC (as recommended here).
However, I can't seem to delete the statefulset. It looks to me like the CassandraDatacenter CRD keeps recreating the statefulset as soon as I delete it, giving me no time to change anything.
My questions are as follows:
Is there a standard way to expand the volume without losing data?
What would happen if I scale down the replicas in the CassandraDatacenter? Will it delete the PVC or keep it?
If there is no standard way, does anyone have any ideas on how to accomplish expanding the volume size without losing data?
Ordinarily in a Cassandra cluster, the best practice is to scale horizontally (not vertically). You want more Cassandra nodes to spread the load out to achieve maximum throughput.
The equivalent in Kubernetes is to scale up your deployment. As you increase the node count, the amount of data on each individual Cassandra node will decrease proportionally.
If you really want to resize the PVC, you will only be able to do it dynamically if you have enabled allowVolumeExpansion. You won't lose data as you do this.
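To check whether expansion is enabled, something like this should work (the storage class name "default" is a placeholder; use the class your PVC references):
# Prints "true" if the storage class allows in-place volume expansion
kubectl get storageclass default -o jsonpath='{.allowVolumeExpansion}'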
Deleting the STS isn't going to work because, by design, it will be automatically replaced, as you already know. You also won't be able to scale down, because there isn't enough capacity (disk space) in your cluster if you do. Cheers!
Answer for: How can I expand a PVC for a StatefulSet on AKS without losing data?
While the answer from Erick Raminez is very good Cassandra-specific advice, I would like to answer the more general question "How can I expand a PVC for my StatefulSet on AKS without losing data?".
If downtime is allowed, you can follow these 4 steps:
# Delete StatefulSet
# This is required on AKS since the azure disk must have status "Unattached"
kubectl delete statefulsets.apps STATEFULSET_NAME
# Edit capacity in
# - your statefulset yaml file
# - each pvc
kubectl patch pvc PVC_NAME -p '{"spec": {"resources": {"requests": {"storage": "3Gi"}}}}'
# Deploy updated statefulset yaml (or helm chart)
kubectl apply -f statefulset.yaml
# Verify Capacity
kubectl get pvc
If you don't want downtime, check the first reference for some additional steps.
References:
https://adamrushuk.github.io/increasing-the-volumeclaimtemplates-disk-size-in-a-statefulset-on-aks/
https://serverfault.com/a/989665/561107

Kubernetes Persistent Volume Claim FileSystemResizePending

I have a persistent volume claim for a Kubernetes pod which shows the message "Waiting for user to (re-)start a pod to finish file system resize of volume on node." when I check it with kubectl describe pvc ...
The resizing itself, which was done with Terraform in our deployments, worked, but this message still shows up and I'm not really sure how to get it fixed. The pod was already restarted several times; I tried kubectl delete pod and scaling it down with kubectl scale deployment.
Does anyone have an idea how to get rid of this message?
There are a few things to consider:
Instead of using Terraform, try resizing the PVC by editing it manually. After that, wait for the underlying volume to be expanded by the storage provider and verify that the FileSystemResizePending condition is present by executing kubectl get pvc <pvc_name> -o yaml. Then, make sure that all the associated pods are restarted so the whole process can complete. Once file system resizing is done, the PVC will automatically be updated to reflect the new size.
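A rough sketch of those two checks; the PVC and Deployment names are placeholders:
# Show any resize-related conditions on the PVC (look for FileSystemResizePending)
kubectl get pvc my-pvc -o jsonpath='{.status.conditions}'
# Restart the pods that mount the PVC so the kubelet can finish the file system resize (kubectl 1.15+)
kubectl rollout restart deployment my-deployment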
Make sure that your volume type is supported for expansion. You can expand the following types of volumes:
gcePersistentDisk
awsElasticBlockStore
Cinder
glusterfs
rbd
Azure File
Azure Disk
Portworx
FlexVolumes
CSI
Check if in your StorageClass the allowVolumeExpansion field is set to true.
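For reference, a minimal StorageClass sketch with expansion enabled; the name and provisioner here are assumptions, keep your existing values:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: resizable-disk                  # hypothetical name
provisioner: kubernetes.io/azure-disk   # assumption - match your environment
allowVolumeExpansion: true              # required for PVC expansion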

Kubernetes: batch restart all pods in a namespace to make new ConfigMap config work

I modified the ConfigMap environment from DEV to FAT, and now I want to make it work in all my pods in the dabai-fat namespace. How do I restart all pods in the namespace? If I modify them one by one it is too slow, and I have more than 20 Deployment services now. How can I apply the config the easy way?
You should prefer mounted ConfigMaps for your solution, since then you will not need a pod restart.
The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync.
The total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the pod can be as long as the kubelet sync period (1 minute by default) plus the TTL of the ConfigMap cache (1 minute by default) in the kubelet. You can trigger an immediate refresh by updating one of the pod's annotations. It is important to remember that a container using a ConfigMap as a subPath volume will not receive ConfigMap updates.
How to Add ConfigMap data to a Volume
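A minimal sketch of mounting a ConfigMap as a volume; the pod, image and ConfigMap names below are placeholders, not taken from the question:
apiVersion: v1
kind: Pod
metadata:
  name: app-pod                 # hypothetical pod name
spec:
  containers:
    - name: app
      image: nginx              # placeholder image
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config   # files appear here, one per ConfigMap key
  volumes:
    - name: config-volume
      configMap:
        name: app-config        # hypothetical ConfigMap name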
You should not edit an already existing ConfigMap.
This question, Restart pods when configmap updates in Kubernetes?, contains the best possible answer to your question.
First, use Deployments so it's easy to scale everything.
Second, create a new ConfigMap and point the Deployment to it. If the new ConfigMap is broken the Deployment won't scale, and if it's correct, the Deployment will scale down to 0 and reschedule new pods that will be using the new ConfigMap.
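A rough sketch of that approach; the ConfigMap name, Deployment name and the Deployment's volume name ("config") are assumptions:
# Create a versioned copy of the ConfigMap holding the FAT values
kubectl create configmap app-config-v2 --from-literal=ENV=FAT
# Repoint the Deployment's volume at the new ConfigMap; changing the pod template triggers a rolling update
kubectl patch deployment my-app -p '{"spec":{"template":{"spec":{"volumes":[{"name":"config","configMap":{"name":"app-config-v2"}}]}}}}'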

What happens to persistent volume if the StatefulSet got deleted and re-created?

I set up Kafka and ZooKeeper as StatefulSets and exposed Kafka to the outside of the cluster. However, whenever I delete the Kafka StatefulSet and re-create it, the data seems to be gone (when I try to consume all the messages using kafkacat, the old messages seem to be gone), even though it is using the same PVC and PV. I am currently using EBS as my persistent volume.
Can someone explain to me what is happening to PV when I delete the statefulset? Please help me.
I would probably look at how the persistent volume is created.
If you run the command
kubectl get pv
you can see the reclaim policy; if it is set to Retain, then your volume will survive even when the StatefulSet is deleted.
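If the policy is currently Delete and you want to keep the data, it can be switched on a live PV, for example (the PV name here is a placeholder):
# Switch the reclaim policy so the volume is kept when its claim goes away
kubectl patch pv pvc-0a1b2c3d -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'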
This is the expected behaviour, because the new StatefulSet will create a new set of PVs and start over. (If there is no other choice it can land on old PVs as well, for example with local volumes.)
A StatefulSet doesn't mean that Kubernetes will remember what you were doing in some other old StatefulSet that you have deleted.
A StatefulSet means that if the pod is restarted or re-created for some reason, the same volume will be assigned to it. This doesn't mean that the volume will be assigned across StatefulSets.
I assume your scenario is that you have a StatefulSet which has a PersistentVolumeClaim definition in it - or it is just referencing an existing volume - and you try to delete it.
In this case the persistent volume will stay there, and the PVC won't disappear either.
This is so that you can, if you want to, remount the same volume to a different StatefulSet pod - or to an updated version of the previous one.
If you want to delete the persistent volume, you should delete the PVC, and the bound PVs will disappear.
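For example (the claim name is a placeholder):
# Deleting the claim releases the volume; with a Delete reclaim policy the bound PV is removed too
kubectl delete pvc my-claim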
By default, Kubernetes prevents PersistentVolumeClaims and the bound PersistentVolume objects from being deleted when you scale a StatefulSet down or delete it.
Retaining PersistentVolumeClaims is the default behavior, but you can configure the StatefulSet to delete them via the persistentVolumeClaimRetentionPolicy field.
This example shows part of a StatefulSet manifest, where the retention policy causes the PersistentVolumeClaim to be deleted when the StatefulSet is scaled down, and retained when the StatefulSet is deleted.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: quiz
spec:
  persistentVolumeClaimRetentionPolicy:
    whenScaled: Delete
    whenDeleted: Retain
Make sure you have properly configured your StatefulSet manifest and Kafka cluster.
NOTE
If you want to delete a StatefulSet but keep the Pods and the
PersistentVolumeClaims, you can use the --cascade=orphan option. In
this case, the PersistentVolumeClaims will be preserved even if the
retention policy is set to Delete.
Marko Lukša "Kubernetes in Action, Second Edition"
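A sketch of that option, using the quiz StatefulSet from the example above:
# Delete the StatefulSet object but leave its pods and PersistentVolumeClaims in place
kubectl delete statefulset quiz --cascade=orphan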

Difference between ConfigMap and the Downward API

I am new to Kubernetes. Can somebody please explain why there are multiple volume types, like:
configMap
emptyDir
projected
secret
downwardAPI
persistentVolumeClaim
For a few I am able to figure it out, like why we need a Secret instead of a ConfigMap.
For the rest I am not able to understand the need for them.
Your question is too generic to answer fully; here are a few comments off the top of my head.
If the deployed pod or containers want configuration data, then you need to use the configMap resource; if there are secrets or passwords, it's obvious to use the secret resource.
Now if the deployed pods want to use values such as POD_NAME, which are generated at schedule or run time, then they need to use the downwardAPI resource.
emptyDir shares its lifecycle with the deployed pod. If the pod dies, then all of the data stored using the emptyDir resource will be gone. If you want to persist the data, then you need to use the persistentVolume, persistentVolumeClaim and storageClass resources.
For further information, see k8s volumes.
ConfigMap is used to make application-specific configuration data available to the container at run time.
The Downward API is used to make Kubernetes metadata (like pod namespace, pod name, pod IP, pod labels, etc.) available to the container at run time.
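A minimal sketch of exposing such metadata via the Downward API environment variables; the pod name and image are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo             # hypothetical pod name
spec:
  containers:
    - name: app
      image: busybox              # placeholder image
      command: ["sh", "-c", "echo running as $POD_NAME in $POD_NAMESPACE && sleep 3600"]
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name        # injected by the Downward API
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace   # injected by the Downward API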