I have a StatefulSet which looks like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  ...
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      resources:
        requests:
          storage: 1Gi
It creates a PersistentVolumeClaim (PVC) and a PersistentVolume (PV) for each Pod it manages.
I want to execute some commands on those PVs before the Pods are created.
I was thinking of creating a Job that mounts those PVs and runs the commands, but how do I know how many PVs were created?
Is there a Kubernetes-native solution to trigger some pod execution on PV creation?
The solution is an init container.
You can add it to the Pod template spec of your StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  ...
  template:
    spec:
      initContainers:
      - name: init-myapp
        image: ubuntu:latest
        command:
        - bash
        - "-c"
        - "your command"
        volumeMounts:
        - name: yourvolume
          mountPath: /mnt/myvolume
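Putting both pieces together, a sketch of a complete StatefulSet, assuming the init container should prepare each Pod's own volume from the volumeClaimTemplate (the image, command, replica count, and label names are placeholders). The key detail is that the volumeMounts name must match the volumeClaimTemplate's metadata.name, so the init container runs once per Pod against that Pod's PV:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      initContainers:
      # Runs to completion before the main container, against this Pod's PV.
      - name: init-myapp
        image: ubuntu:latest
        command: ["bash", "-c", "your command"]
        volumeMounts:
        - name: www
          mountPath: /mnt/myvolume
      containers:
      - name: web
        image: nginx
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi

Because the init container is part of the Pod template, it runs for every replica (and again whenever the Pod is rescheduled), so the commands should be idempotent.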
I am able to deploy the database service itself, but when I try to deploy with a persistent volume claim as well, the deployment silently fails. Below is the deployment.yaml file I am using. The service deploys fine if I remove the first 14 lines that define the persistent volume claim.
apiVersion: apps/v1
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: timescale-pvc-1
  namespace: my-namespace
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: standard
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: timescale
spec:
  selector:
    matchLabels:
      app: timescale
  replicas: 1
  template:
    metadata:
      labels:
        app: timescale
    spec:
      containers:
      - name: timescale
        image: timescale/timescaledb:2.3.0-pg11
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_PASSWORD
          value: "password"
        - name: POSTGRES_DB
          value: "metrics"
      volumes:
      - name: timescaledb-pv
        persistentVolumeClaim:
          claimName: timescale-pvc-1
Consider a StatefulSet for running stateful apps like databases; a Deployment is preferred for stateless services.
You are using the following storage class in the PVC:
storageClassName: standard
Ensure that this storage class supports dynamic storage provisioning.
Are you creating a PV along with the PVC and Deployment? A Deployment, StatefulSet, or Pod can only use a PVC if a PV is available to bind it.
If you are creating the PV as well, then there may be a different issue. Please share the logs of your Deployment and PVC.
I have a deployment which includes a configMap, persistentVolumeClaim, and a service. I have changed the configMap and re-applied the deployment to my cluster. I understand that this change does not automatically restart the pod in the deployment:
configmap change doesn't reflect automatically on respective pods
Updated configMap.yaml but it's not being applied to Kubernetes pods
I know that I can kubectl delete -f wiki.yaml && kubectl apply -f wiki.yaml. But that destroys the persistent volume which has data I want to survive the restart. How can I restart the pod in a way that keeps the existing volume?
Here's what wiki.yaml looks like:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dot-wiki
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: wiki-config
data:
  config.json: |
    {
      "farm": true,
      "security_type": "friends",
      "secure_cookie": false,
      "allowed": "*"
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wiki-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wiki
  template:
    metadata:
      labels:
        app: wiki
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      initContainers:
      - name: wiki-config
        image: dobbs/farm:restrict-new-wiki
        securityContext:
          runAsUser: 0
          runAsGroup: 0
          allowPrivilegeEscalation: false
        volumeMounts:
        - name: dot-wiki
          mountPath: /home/node/.wiki
        command: ["chown", "-R", "1000:1000", "/home/node/.wiki"]
      containers:
      - name: farm
        image: dobbs/farm:restrict-new-wiki
        command: [
          "wiki", "--config", "/etc/config/config.json",
          "--admin", "bad password but memorable",
          "--cookieSecret", "any-random-string-will-do-the-trick"]
        ports:
        - containerPort: 3000
        volumeMounts:
        - name: dot-wiki
          mountPath: /home/node/.wiki
        - name: config-templates
          mountPath: /etc/config
      volumes:
      - name: dot-wiki
        persistentVolumeClaim:
          claimName: dot-wiki
      - name: config-templates
        configMap:
          name: wiki-config
---
apiVersion: v1
kind: Service
metadata:
  name: wiki-service
spec:
  ports:
  - name: http
    targetPort: 3000
    port: 80
  selector:
    app: wiki
In addition to kubectl rollout restart deployment, there are some alternative approaches to do this:
1. Restart Pods
kubectl delete pods -l app=wiki
This deletes the Pods of your Deployment; the ReplicaSet recreates them immediately, and the new Pods read the updated ConfigMap.
2. Version the ConfigMap
Instead of naming your ConfigMap just wiki-config, name it wiki-config-v1. Then when you update your configuration, just create a new ConfigMap named wiki-config-v2.
Now, edit your Deployment specification to reference the wiki-config-v2 ConfigMap instead of wiki-config-v1:
apiVersion: apps/v1
kind: Deployment
# ...
      volumes:
      - name: config-templates
        configMap:
          name: wiki-config-v2
Then, reapply the Deployment:
kubectl apply -f wiki.yaml
Since the Pod template in the Deployment manifest has changed, the reapplication of the Deployment will recreate all the Pods. And the new Pods will use the new version of the ConfigMap.
As an additional advantage of this approach, if you keep the old ConfigMap (wiki-config-v1) around rather than deleting it, you can revert to a previous configuration at any time by just editing the Deployment manifest again.
This approach is described in Chapter 1 of Kubernetes Best Practices (O'Reilly, 2019).
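As a side note, this versioning pattern can be automated. Kustomize's configMapGenerator appends a content hash to the generated ConfigMap's name and rewrites all references to it, so every config change rolls the Pods. A minimal sketch, assuming config.json has been extracted into its own file next to the manifests and the inline ConfigMap removed from wiki.yaml:

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- wiki.yaml   # assumed to no longer contain the ConfigMap itself
configMapGenerator:
- name: wiki-config
  files:
  - config.json

Applying with kubectl apply -k . then creates a ConfigMap named wiki-config-<hash> and points the Deployment's volume at it; old versions stay around for rollback unless pruned.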
For the specific question about restarting containers after the configuration is changed, as of kubectl v1.15 you can do this:
# apply the config changes
kubectl apply -f wiki.yaml
# restart the containers in the deployment
kubectl rollout restart deployment wiki-deployment
You need do nothing but change your ConfigMap and wait for the changes to be applied. The answer linked in the question is wrong: after a ConfigMap change, the files mounted from it are not updated right away; propagation can take a few minutes. (Note that this applies only to ConfigMaps mounted as volumes, and the application must still re-read its config files.)
If that doesn't happen, you can report a bug against that specific version of Kubernetes.
I have a DaemonSet that creates flink task manager pods, one per each node.
Nodes
Say I have two nodes
node-A
node-B
Pods
the daemonSet would create
pod-A on node-A
pod-B on node-B
Persistent Volume Claim
I am on AKS and want to use azure-disk for persistent storage.
According to the docs (https://learn.microsoft.com/en-us/azure/aks/azure-disks-dynamic-pv), an Azure disk can be attached to only a single node.
Say I create
pvc-A for pv-A attached to node-A
pvc-B for pv-B attached to node-B
Question
How can I make pod-A on node-A use pvc-A?
UPDATE:
After much googling, I stumbled on suggestions that it might be better/cleaner to use a StatefulSet instead. This does mean you lose features of a DaemonSet, such as one Pod per node.
https://medium.com/@zhimin.wen/persistent-volume-claim-for-statefulset-8050e396cc51
If you use a persistentVolumeClaim in your DaemonSet definition, and that claim is satisfied by a PV of type hostPath, your daemon Pods will read and write the local path defined by hostPath on their own node. This behavior lets you separate the storage per node while using only one PVC.
This might not directly apply to your situation but I hope this helps whoever searching for something like a "volumeClaimTemplate for DaemonSet" in the future.
Using the same example as cookiedough (thank you!)
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: x
  namespace: x
  labels:
    k8s-app: x
spec:
  selector:
    matchLabels:
      name: x
  template:
    metadata:
      labels:
        name: x
    spec:
      ...
      containers:
      - name: x
        ...
        volumeMounts:
        - name: volume
          mountPath: /var/log
      volumes:
      - name: volume
        persistentVolumeClaim:
          claimName: my-pvc
And that PVC is bound to a PV (Note that there is only one PVC and one PV!)
apiVersion: v1
kind: PersistentVolume
metadata:
  creationTimestamp: null
  labels:
    type: local
  name: mem
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /tmp/mem
    type: Directory
  storageClassName: standard
status: {}
Your daemon pods will actually use /tmp/mem on each node. (There's at most 1 daemon pod on each node so that's fine.)
The way to attach a PVC to your DaemonSet pod is not any different than how you do it with other types of pods. Create your PVC and mount it as a volume onto the pod.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pvc
  namespace: x
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
This is what the DaemonSet manifest would look like:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: x
  namespace: x
  labels:
    k8s-app: x
spec:
  selector:
    matchLabels:
      name: x
  template:
    metadata:
      labels:
        name: x
    spec:
      ...
      containers:
      - name: x
        ...
        volumeMounts:
        - name: volume
          mountPath: /var/log
      volumes:
      - name: volume
        persistentVolumeClaim:
          claimName: my-pvc
I am working on an application on Kubernetes in GCP and I need a really large SSD storage for it.
So I created a StorageClass resource, a PersistentVolumeClaim that requests 500Gi of space, and a Deployment resource.
StorageClass.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: faster
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
PVC.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-volume
spec:
  storageClassName: faster
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
Deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mongo
    spec:
      containers:
      - image: mongo
        name: mongo
        ports:
        - containerPort: 27017
        volumeMounts:
        - mountPath: /data/db
          name: mongo-volume
      volumes:
      - name: mongo-volume
        persistentVolumeClaim:
          claimName: mongo-volume
When I applied the PVC, it was stuck in the Pending state for hours. I found out experimentally that it binds correctly with a maximum of 200Gi of requested storage.
However, I can create several 200Gi PVCs. Is there a way to bind them to one path so they work as one big PVC in Deployment.yaml? Or can the 200Gi limit be raised?
I have just tested this in my own environment and it works perfectly, so the problem is your quotas.
To check this, go to:
IAM & admin -> Quotas -> Compute Engine API, Local SSD (GB), "your region"
and look at the amount you have used.
I recreated the situation by running out of quota, and my PVC got stuck in Pending status the same as yours.
It happens because each PVC you create requests 500GB.
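If raising the quota isn't an option, note that a bound PVC can also be grown in place later, provided its StorageClass allows expansion and the underlying driver supports it (GCE PD does). A sketch of the StorageClass change:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: faster
provisioner: kubernetes.io/gce-pd
allowVolumeExpansion: true   # permits editing the PVC's requested size later
parameters:
  type: pd-ssd

With this in place, editing spec.resources.requests.storage on the PVC triggers a resize of the backing disk; the effective ceiling is then the quota, not the original PVC request.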
I am trying to use a persistent volume for my RethinkDB server, but I get this error:
Unable to mount volumes for pod "rethinkdb-server-deployment-6866f5b459-25fjb_default(efd90244-7d02-11e8-bffa-42010a8400b9)": timeout expired waiting for volumes to attach/mount for pod "default"/"rethinkdb-server-deployment-
Multi-Attach error for volume "pvc-f115c85e-7c42-11e8-bffa-42010a8400b9" Volume is already used by pod(s) rethinkdb-server-deployment-58f68c8464-4hn9x
I think Kubernetes deploys the new Pod before removing the old one, so they can't share the volume, because my PVC is ReadWriteOnce. The persistent volume must be created automatically, so I can't manually create and format a persistent disk.
My configuration:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: default
  name: rethinkdb-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  namespace: default
  labels:
    db: rethinkdb
    role: admin
  name: rethinkdb-server-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rethinkdb-server
  template:
    metadata:
      name: rethinkdb-server-pod
      labels:
        app: rethinkdb-server
    spec:
      containers:
      - name: rethinkdb-server
        image: gcr.io/$PROJECT_ID/rethinkdb-server:$LAST_VERSION
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 8080
          name: admin-port
        - containerPort: 28015
          name: driver-port
        - containerPort: 29015
          name: cluster-port
        volumeMounts:
        - mountPath: /data/rethinkdb_data
          name: rethinkdb-storage
      volumes:
      - name: rethinkdb-storage
        persistentVolumeClaim:
          claimName: rethinkdb-pvc
How do you manage this?
I see that you've added the PersistentVolumeClaim to a Deployment, and that you are trying to scale the node pool.
A PersistentVolumeClaim will work with a Deployment, but only if you do not scale the Deployment. That is why this error message showed up: the volume is already in use by an existing Pod when a new replica starts.
Because you are trying to scale, the other replicas try to mount and use the same volume.
Solution: use the PersistentVolumeClaim from a StatefulSet object, not a Deployment. Instructions on how to deploy a StatefulSet can be found in this article. With a StatefulSet, each Pod gets its own PersistentVolumeClaim, so you can then scale the node pool.
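A minimal sketch of what that StatefulSet could look like, reusing the names from the Deployment above (the serviceName and its headless Service are assumptions, and the env/port sections are elided for brevity):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rethinkdb-server
  namespace: default
spec:
  serviceName: rethinkdb   # headless Service, assumed to exist
  replicas: 1
  selector:
    matchLabels:
      app: rethinkdb-server
  template:
    metadata:
      labels:
        app: rethinkdb-server
    spec:
      containers:
      - name: rethinkdb-server
        image: gcr.io/$PROJECT_ID/rethinkdb-server:$LAST_VERSION
        volumeMounts:
        - mountPath: /data/rethinkdb_data
          name: rethinkdb-storage
  # One PVC (and thus one disk) is created per replica, so scaling or
  # rolling Pods no longer causes Multi-Attach errors on ReadWriteOnce volumes.
  volumeClaimTemplates:
  - metadata:
      name: rethinkdb-storage
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 30Gi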