I am trying to use the VolumeSnapshot backup mechanism, promoted to beta in Kubernetes 1.17.
Here is my scenario:
Create the nginx deployment and the PVC used by it
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - name: my-pvc
          mountPath: /root/test
      volumes:
      - name: my-pvc
        persistentVolumeClaim:
          claimName: nginx-pvc
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  finalizers: null
  labels:
    name: nginx-pvc
  name: nginx-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: premium-rwo
Exec into the running nginx container, cd into the PVC mounted path and create some files
▶ k exec -it nginx-deployment-84765795c-7hz5n bash
root@nginx-deployment-84765795c-7hz5n:/# cd /root/test
root@nginx-deployment-84765795c-7hz5n:~/test# touch {1..10}.txt
root@nginx-deployment-84765795c-7hz5n:~/test# ls
1.txt  10.txt  2.txt  3.txt  4.txt  5.txt  6.txt  7.txt  8.txt  9.txt  lost+found
root@nginx-deployment-84765795c-7hz5n:~/test#
Create the following VolumeSnapshot using as source the nginx-pvc
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  namespace: default
  name: nginx-volume-snapshot
spec:
  volumeSnapshotClassName: pd-retain-vsc
  source:
    persistentVolumeClaimName: nginx-pvc
The VolumeSnapshotClass used is the following
apiVersion: snapshot.storage.k8s.io/v1beta1
deletionPolicy: Retain
driver: pd.csi.storage.gke.io
kind: VolumeSnapshotClass
metadata:
  creationTimestamp: "2020-09-25T09:10:16Z"
  generation: 1
  name: pd-retain-vsc
and wait until it becomes readyToUse: true
apiVersion: v1
items:
- apiVersion: snapshot.storage.k8s.io/v1beta1
  kind: VolumeSnapshot
  metadata:
    creationTimestamp: "2020-11-04T09:38:00Z"
    finalizers:
    - snapshot.storage.kubernetes.io/volumesnapshot-as-source-protection
    generation: 1
    name: nginx-volume-snapshot
    namespace: default
    resourceVersion: "34170857"
    selfLink: /apis/snapshot.storage.k8s.io/v1beta1/namespaces/default/volumesnapshots/nginx-volume-snapshot
    uid: ce1991f8-a44c-456f-8b2a-2e12f8df28fc
  spec:
    source:
      persistentVolumeClaimName: nginx-pvc
    volumeSnapshotClassName: pd-retain-vsc
  status:
    boundVolumeSnapshotContentName: snapcontent-ce1991f8-a44c-456f-8b2a-2e12f8df28fc
    creationTime: "2020-11-04T09:38:02Z"
    readyToUse: true
    restoreSize: 8Gi
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Delete the nginx deployment and the initial PVC
▶ k delete pvc,deploy --all
persistentvolumeclaim "nginx-pvc" deleted
deployment.apps "nginx-deployment" deleted
Create a new PVC, using the previously created VolumeSnapshot as its dataSource
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  finalizers: null
  labels:
    name: nginx-pvc-restored
  name: nginx-pvc-restored
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  dataSource:
    name: nginx-volume-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
▶ k create -f nginx-pvc-restored.yaml
persistentvolumeclaim/nginx-pvc-restored created
▶ k get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nginx-pvc-restored Bound pvc-56d0a898-9f65-464f-8abf-90fa0a58a048 8Gi RWO standard 39s
Point the nginx Deployment at the new (restored) PVC
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - name: my-pvc
          mountPath: /root/test
      volumes:
      - name: my-pvc
        persistentVolumeClaim:
          claimName: nginx-pvc-restored
and create the Deployment again
▶ k create -f nginx-deployment-restored.yaml
deployment.apps/nginx-deployment created
cd into the PVC mounted directory. It should contain the previously created files, but it's empty
▶ k exec -it nginx-deployment-67c7584d4b-l7qrq bash
root@nginx-deployment-67c7584d4b-l7qrq:/# cd /root/test
root@nginx-deployment-67c7584d4b-l7qrq:~/test# ls
lost+found
root@nginx-deployment-67c7584d4b-l7qrq:~/test#
▶ k version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.12", GitCommit:"5ec472285121eb6c451e515bc0a7201413872fa3", GitTreeState:"clean", BuildDate:"2020-09-16T13:39:51Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.12-gke.1504", GitCommit:"17061f5bd4ee34f72c9281d49f94b4f3ac31ac25", GitTreeState:"clean", BuildDate:"2020-10-19T17:00:22Z", GoVersion:"go1.13.15b4", Compiler:"gc", Platform:"linux/amd64"}
This is a community wiki answer posted for more clarity of the current problem. Feel free to expand on it.
As mentioned by @pkaramol, this is an ongoing issue tracked in the following thread:
Creating an intree PVC with datasource should fail #96225
What happened: In clusters that have intree drivers as the default
storageclass, if you try to create a PVC with snapshot data source and
forget to put the csi storageclass in it, then an empty PVC will be
provisioned using the default storageclass.
What you expected to happen: PVC creation should not proceed and
instead have an event with an incompatible error, similar to how we
check proper csi driver in the csi provisioner.
This issue has not yet been resolved at the moment of writing this answer.
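Until the issue is fixed, the practical workaround is to set the CSI storage class explicitly on the restored PVC so provisioning cannot fall back to the in-tree default class (note that nginx-pvc-restored above ended up bound with the standard class instead of premium-rwo). A sketch, reusing the names from the question:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc-restored
  namespace: default
spec:
  # Explicitly pin the CSI storage class; without it the in-tree
  # default class provisions an empty volume and the dataSource
  # is silently ignored.
  storageClassName: premium-rwo
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  dataSource:
    name: nginx-volume-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
```

With the storage class matching the CSI driver that took the snapshot, the restored volume should contain the files created earlier.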
Related
I am trying to mount a linux directory as a shared directory for multiple containers in minikube.
Here is my config:
minikube start --insecure-registry="myregistry.com:5000" --mount --mount-string="/tmp/myapp/k8s/:/data/myapp/share/"
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myapp-share-storage
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  local:
    path: "/data/myapp/share/"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - minikube
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-share-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: myapp-server
  name: myapp-server
spec:
  selector:
    matchLabels:
      io.kompose.service: myapp-server
  template:
    metadata:
      labels:
        io.kompose.service: myapp-server
    spec:
      containers:
      - name: myapp-server
        image: myregistry.com:5000/server-myapp:alpine
        ports:
        - containerPort: 80
        resources: {}
        volumeMounts:
        - mountPath: /data/myapp/share
          name: myapp-share
        env:
        - name: storage__root_directory
          value: /data/myapp/share
      volumes:
      - name: myapp-share
        persistentVolumeClaim:
          claimName: myapp-share-claim
status: {}
It works, with pitfalls: StatefulSets are not supported; they cause deadlock errors:
pending PVC: waiting for first consumer to be created before binding
pending POD: 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind
Another option is to use a minikube PersistentVolumeClaim without a PersistentVolume (one will be created automatically). However:
The volume is created in /tmp (e.g. /tmp/hostpath-provisioner/default/myapp-share-claim)
Minikube doesn't honor the mount request
How can I make it just work?
Using your yaml file I've managed to create the volumes and deploy it without issue, but I had to use the command minikube mount /mydir/:/data/myapp/share/ after starting minikube, since --mount --mount-string="/mydir/:/data/myapp/share/" wasn't working.
What's the problem?
I can't get my pods running which are using a volume. In the Kubernetes Dashboard I got the following error:
running "VolumeBinding" filter plugin for pod "influxdb-6979bff6f9-hpf89": pod has unbound immediate PersistentVolumeClaims
What did I do?
After running kompose convert on my docker-compose.yml file, I tried to start the pods with microk8s kubectl apply -f . (I am using microk8s). I had to replace the apiVersion of the NetworkPolicy yaml files with networking.k8s.io/v1 (see here), but apart from this change, I didn't change anything.
YAML Files
influxdb-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: ./kompose convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: influxdb
  name: influxdb
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: influxdb
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: ./kompose convert
        kompose.version: 1.21.0 (992df58d8)
      creationTimestamp: null
      labels:
        io.kompose.network/cloud-net: "true"
        io.kompose.network/default: "true"
        io.kompose.service: influxdb
    spec:
      containers:
      - env:
        - name: INFLUXDB_HTTP_LOG_ENABLED
          value: "false"
        image: influxdb:1.8
        imagePullPolicy: ""
        name: influxdb
        ports:
        - containerPort: 8086
        resources: {}
        volumeMounts:
        - mountPath: /var/lib/influxdb
          name: influx
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
      - name: influx
        persistentVolumeClaim:
          claimName: influx
status: {}
influxdb-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: ./kompose convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: influxdb
  name: influxdb
spec:
  ports:
  - name: "8087"
    port: 8087
    targetPort: 8086
  selector:
    io.kompose.service: influxdb
status:
  loadBalancer: {}
influx-persistenvolumeclaim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: influx
  name: influx
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
The PersistentVolumeClaim will remain unbound if the cluster has neither a StorageClass that can dynamically provision a PersistentVolume nor a manually created PersistentVolume that satisfies the claim.
Here is a guide on how to configure a pod to use PersistentVolume
To solve the current scenario you can manually create a PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 100Mi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
Please note that hostPath is used here only as an example; it is not recommended for production use. Consider using external block or file storage from the supported types here
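For the influx claim above to actually bind to a manually created PV like this one, the claim's storageClassName has to match the PV's. A sketch of the adjusted claim (assuming the manual class from the PV above):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: influx
spec:
  # Must match the PV's storageClassName; without it the claim
  # targets the cluster's default class and stays Pending if no
  # dynamic provisioner exists.
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
```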
I am new to k8s and trying to update storageClassName in a StatefulSet (from default to default-t1; the only change in the YAML).
I tried running kubectl apply -f test.yaml
The only difference between the 1st and 2nd YAML (the one used to apply the update) is storageClassName: default-t1 instead of default
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  podManagementPolicy: "Parallel"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: default
      resources:
        requests:
          storage: 1Gi
Every time I try to update it I get The StatefulSet "web" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden
What am I missing or what steps should I take to do this?
There's no easy way as far as I know, but it's possible.
Save your current StatefulSet configuration to a yaml file:
kubectl get statefulset some-statefulset -o yaml > statefulset.yaml
Change storageClassName in the volumeClaimTemplates section of the StatefulSet configuration saved in the yaml file
Delete StatefulSet without removing pods using:
kubectl delete statefulset some-statefulset --cascade=orphan
Recreate StatefulSet with changed StorageClass:
kubectl apply -f statefulset.yaml
For each Pod in the StatefulSet, first delete its PVC (it will get stuck in the Terminating state until you delete the Pod) and then delete the Pod itself
After deleting each Pod, the StatefulSet will recreate the Pod and (since there is no PVC) also a new PVC for that Pod, using the changed StorageClass defined in the StatefulSet.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  podManagementPolicy: "Parallel"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: <storage class name>
      resources:
        requests:
          storage: 1Gi
In the above file, add the storage class and then apply it with:
kubectl apply -f <file>.yaml
The first line of a StatefulSet should define an "apiVersion"; you can see an example here: statefulset. Please add it in the first line:
apiVersion: apps/v1
Could you show me the output of your 'www' PVC?
kubectl get pvc www -o yaml
In the PVC you have the field "storageClassName", which should be set to the StorageClass you want to use, so in your case it would be:
storageClassName: default-t1
I have a Pod or Job yaml spec file (I can edit it) and I want to launch it from my local machine (e.g. using kubectl create -f my_spec.yaml)
The spec declares a volume mount. There would be a file in that volume that I want to use as value for an environment variable.
I want the contents of the volume file to end up in the environment variable (without me jumping through hoops by somehow "downloading" the file to my local machine and inserting it into the spec).
P.S. It's obvious how to do that if you have control over the command of the container. But when launching an arbitrary image, I have no control over the command attribute as I do not know it.
apiVersion: batch/v1
kind: Job
metadata:
  generateName: puzzle
spec:
  template:
    spec:
      containers:
      - name: main
        image: arbitrary-image
        env:
        - name: my_var
          valueFrom: <Contents of /mnt/my_var_value.txt>
        volumeMounts:
        - name: my-vol
          mountPath: /mnt
      volumes:
      - name: my-vol
        persistentVolumeClaim:
          claimName: my-pvc
You can create a deployment with a kubectl endless loop which will constantly poll the volume and update a configmap from it. After that you can mount the created configmap into your pod. It's a little bit hacky but it will work and update your configmap automatically. The only requirement is that the PV must be ReadWriteMany or ReadOnlyMany (but in that case you can mount it in read-only mode to all pods).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cm-creator
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: cm-creator
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  # "patch" is needed by kubectl apply
  verbs: ["create", "update", "get", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cm-creator
  namespace: default
subjects:
- kind: User
  name: system:serviceaccount:default:cm-creator
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: cm-creator
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cm-creator
  namespace: default
  labels:
    app: cm-creator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cm-creator
  template:
    metadata:
      labels:
        app: cm-creator
    spec:
      # serviceAccountName belongs in the pod template spec
      serviceAccountName: cm-creator
      containers:
      - name: cm-creator
        image: bitnami/kubectl
        command:
        - /bin/bash
        - -c
        args:
        - while true; do
            kubectl create cm myconfig --from-file=my_var=/mnt/my_var_value.txt --dry-run=client -o yaml | kubectl apply -f -;
            sleep 60;
          done
        volumeMounts:
        - name: my-vol
          mountPath: /mnt
          readOnly: true
      volumes:
      - name: my-vol
        persistentVolumeClaim:
          claimName: my-pvc
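Once the ConfigMap exists, the arbitrary-image pod can consume it as an environment variable via configMapKeyRef, without any control over the container's command. A sketch, where myconfig and my_var match the names used in the loop above:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  generateName: puzzle
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: arbitrary-image
        env:
        - name: my_var
          valueFrom:
            configMapKeyRef:
              # ConfigMap maintained by the cm-creator loop
              name: myconfig
              key: my_var
```

Note the environment variable only reflects the file contents as of the loop's last sync (up to 60 seconds stale), and only at container start.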
I have a DaemonSet that creates flink task manager pods, one per each node.
Nodes
Say I have two nodes
node-A
node-B
Pods
the daemonSet would create
pod-A on node-A
pod-B on node-B
Persistent Volume Claim
I am on AKS and want to use azure-disk for Persistent Storage
According to the docs: [https://learn.microsoft.com/en-us/azure/aks/azure-disks-dynamic-pv]
an Azure disk can be attached to only a single node
say I create
pvc-A for pv-A attached to node-A
pvc-B for pv-B attached to node-B
Question
How can I associate pod-A on node-A to use pvc-A?
UPDATE:
After much googling, I found that it might be better/cleaner to use a StatefulSet instead. This does mean that you won't get the features available to you via DaemonSet, like one pod per node.
https://medium.com/@zhimin.wen/persistent-volume-claim-for-statefulset-8050e396cc51
If you use a persistentVolumeClaim in your daemonset definition, and the persistentVolumeClaim is satisfied by a PV of type hostPath, your daemon pods will read and write to the local path defined by hostPath on their own node. This behavior lets you keep the storage node-local while using a single PVC.
This might not directly apply to your situation but I hope this helps whoever searching for something like a "volumeClaimTemplate for DaemonSet" in the future.
Using the same example as cookiedough (thank you!)
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: x
  namespace: x
  labels:
    k8s-app: x
spec:
  selector:
    matchLabels:
      name: x
  template:
    metadata:
      labels:
        name: x
    spec:
      ...
      containers:
      - name: x
        ...
        volumeMounts:
        - name: volume
          mountPath: /var/log
      volumes:
      - name: volume
        persistentVolumeClaim:
          claimName: my-pvc
And that PVC is bound to a PV (Note that there is only one PVC and one PV!)
apiVersion: v1
kind: PersistentVolume
metadata:
  creationTimestamp: null
  labels:
    type: local
  name: mem
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /tmp/mem
    type: Directory
  storageClassName: standard
status: {}
Your daemon pods will actually use /tmp/mem on each node. (There's at most 1 daemon pod on each node so that's fine.)
The way to attach a PVC to your DaemonSet pod is no different from how you do it with other types of pods. Create your PVC and mount it as a volume onto the pod.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pvc
  namespace: x
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
This is what the DaemonSet manifest would look like:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: x
  namespace: x
  labels:
    k8s-app: x
spec:
  selector:
    matchLabels:
      name: x
  template:
    metadata:
      labels:
        name: x
    spec:
      ...
      containers:
      - name: x
        ...
        volumeMounts:
        - name: volume
          mountPath: /var/log
      volumes:
      - name: volume
        persistentVolumeClaim:
          claimName: my-pvc