I have a pod created by a Deployment that uses the git-sync image and mounts a volume backed by a PVC:
apiVersion: apps/v1
kind: Deployment
metadata:
name: config
namespace: test
spec:
replicas: 1
selector:
matchLabels:
demo: config
template:
metadata:
labels:
demo: config
spec:
containers:
- args:
- '-ssh'
- '-repo=git@domain.com:org/repo.git'
- '-dest=conf'
- '-branch=master'
- '-depth=1'
image: 'k8s.gcr.io/git-sync:v3.1.1'
name: git-sync
securityContext:
runAsUser: 65533
volumeMounts:
- mountPath: /etc/git-secret
name: git-secret
readOnly: true
- mountPath: /config
name: cus-config
securityContext:
fsGroup: 65533
volumes:
- name: git-secret
secret:
defaultMode: 256
secretName: git-creds
- name: cus-config
persistentVolumeClaim:
claimName: cus-config
After the deployment, I checked the pod and found the synced files at a path like this:
/tmp/git/conf/subdirA/some.Files
Then I created a second pod from another Deployment, and I want to mount tmp/git/conf/subdirA into that second pod. This is my second Deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-mount-config
namespace: test
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: 'nginx:1.7.9'
name: nginx
ports:
- containerPort: 80
volumeMounts:
- mountPath: /root/conf
name: config
subPath: tmp/git/conf/subdirA
volumes:
- name: config
persistentVolumeClaim:
claimName: cus-config
This is my PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
volume.beta.kubernetes.io/storage-class: conf
name: config
namespace: test
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Mi
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: conf
namespace: test
provisioner: spdbyz
reclaimPolicy: Retain
I have already read about subPath on PVCs, but every time I check the folder /root/conf in the second pod, there is nothing inside it.
Any idea how to mount a specific PVC subdirectory in another pod?
Here is a very basic example of how to share file content between pods using a PV/PVC.
First, create a PersistentVolume; refer to the YAML example below, which uses a hostPath configuration.
apiVersion: v1
kind: PersistentVolume
metadata:
name: my-pv-1
labels:
pv: my-pv-1
spec:
capacity:
storage: 1Gi
volumeMode: Filesystem
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
hostPath:
path: /var/log/mypath
$ kubectl create -f pv.yaml
persistentvolume/my-pv-1 created
Second, create a PersistentVolumeClaim using the YAML example below.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc-claim-1
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
selector:
matchLabels:
pv: my-pv-1
$ kubectl create -f pvc.yaml
persistentvolumeclaim/my-pvc-claim-1 created
Verify that the PV and PVC STATUS is Bound:
$ kubectl get persistentvolume
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv-1 1Gi RWX Retain Bound default/my-pvc-claim-1 62s
$ kubectl get persistentvolumeclaims
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
my-pvc-claim-1 Bound my-pv-1 1Gi RWX 58
Third, consume the PVC in the required pods; refer to the example YAML below, where the volume is mounted in two pods, nginx-1 and nginx-2.
apiVersion: v1
kind: Pod
metadata:
name: nginx-1
spec:
containers:
- image: nginx
name: nginx-1
volumeMounts:
- mountPath: /var/log/mypath
name: test-vol
subPath: TestSubPath
volumes:
- name: test-vol
persistentVolumeClaim:
claimName: my-pvc-claim-1
$ kubectl create -f nginx-1.yaml
pod/nginx-1 created
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-1 1/1 Running 0 35s 10.244.3.53 k8s-node-3 <none> <none>
Create the second pod, consuming the same PVC:
apiVersion: v1
kind: Pod
metadata:
name: nginx-2
spec:
containers:
- image: nginx
name: nginx-2
volumeMounts:
- mountPath: /var/log/mypath
name: test-vol
subPath: TestSubPath
volumes:
- name: test-vol
persistentVolumeClaim:
claimName: my-pvc-claim-1
$ kubectl create -f nginx-2.yaml
pod/nginx-2 created
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-1 1/1 Running 0 55s 10.244.3.53 k8s-node-3 <none> <none>
nginx-2 1/1 Running 0 35s 10.244.3.54 k8s-node-3 <none> <none>
Test by connecting to the first container and writing to a file under the mount path.
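For example, open a shell in the first pod (the same pattern is used for the second pod further below):
$ kubectl exec -it nginx-1 -- /bin/bash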
root@nginx-1:/# df -kh
Filesystem Size Used Avail Use% Mounted on
overlay 12G 7.3G 4.4G 63% /
tmpfs 64M 0 64M 0% /dev
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/vda1 12G 7.3G 4.4G 63% /etc/hosts
shm 64M 0 64M 0% /dev/shm
tmpfs 3.9G 12K 3.9G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 3.9G 0 3.9G 0% /proc/acpi
tmpfs 3.9G 0 3.9G 0% /proc/scsi
tmpfs 3.9G 0 3.9G 0% /sys/firmware
root@nginx-1:/# cd /var/log/mypath/
root@nginx-1:/var/log/mypath# date >> date.txt
root@nginx-1:/var/log/mypath# date >> date.txt
root@nginx-1:/var/log/mypath# cat date.txt
Thu Jan 30 10:44:42 UTC 2020
Thu Jan 30 10:44:43 UTC 2020
Now connect to the second pod/container; it should see the file written by the first, as shown below:
$ kubectl exec -it nginx-2 -- /bin/bash
root@nginx-2:/# cat /var/log/mypath/date.txt
Thu Jan 30 10:44:42 UTC 2020
Thu Jan 30 10:44:43 UTC 2020
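Applied back to the Deployments in the question, note that subPath is interpreted relative to the root of the mounted volume, not as a path inside the container's filesystem. A minimal sketch of the second Deployment's mount, assuming the git-sync output really does land on the PVC so that conf/subdirA exists at the volume root:
        volumeMounts:
        - mountPath: /root/conf
          name: config
          subPath: conf/subdirA   # relative to the PVC root, not to /tmp/git inside the first container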
Related
I use this manifest configuration to deploy a registry into a 3-node Kubernetes cluster:
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv1
namespace: registry-space
spec:
capacity:
storage: 5Gi # specify your own size
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Retain
local:
path: /opt/registry # can be any path
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- kubernetes2
accessModes:
- ReadWriteMany # only 1 node will read/write on the path.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv1-claim
namespace: registry-space
spec: # should match specs added in the PersistenVolume
accessModes:
- ReadWriteMany
volumeMode: Filesystem
resources:
requests:
storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: private-repository-k8s
namespace: registry-space
labels:
app: private-repository-k8s
spec:
replicas: 1
selector:
matchLabels:
app: private-repository-k8s
template:
metadata:
labels:
app: private-repository-k8s
spec:
volumes:
- name: certs-vol
hostPath:
path: /opt/certs
type: Directory
- name: task-pv-storage
persistentVolumeClaim:
claimName: pv1-claim # specify the PVC that you've created. PVC and Deployment must be in same namespace.
containers:
- image: registry:2
name: private-repository-k8s
imagePullPolicy: IfNotPresent
env:
- name: REGISTRY_HTTP_TLS_CERTIFICATE
value: "/opt/certs/registry.crt"
- name: REGISTRY_HTTP_TLS_KEY
value: "/opt/certs/registry.key"
ports:
- containerPort: 5000
volumeMounts:
- name: certs-vol
mountPath: /opt/certs
- name: task-pv-storage
mountPath: /opt/registry
I manually created directories on every node under /opt/certs and /opt/registry.
But when I try to deploy the manifest without nodeSelectorTerms hardcoded to the control plane, I get an error:
kubernetes@kubernetes1:/opt/registry$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-58dbc876ff-fsjd5 1/1 Running 1 (74m ago) 84m
kube-system calico-node-5brzt 1/1 Running 1 (73m ago) 84m
kube-system calico-node-nph9n 1/1 Running 1 (76m ago) 84m
kube-system calico-node-pcd74 1/1 Running 1 (74m ago) 84m
kube-system calico-node-ph2ht 1/1 Running 1 (76m ago) 84m
kube-system coredns-565d847f94-7pswp 1/1 Running 1 (74m ago) 105m
kube-system coredns-565d847f94-tlrfr 1/1 Running 1 (74m ago) 105m
kube-system etcd-kubernetes1 1/1 Running 2 (74m ago) 105m
kube-system kube-apiserver-kubernetes1 1/1 Running 2 (74m ago) 105m
kube-system kube-controller-manager-kubernetes1 1/1 Running 2 (74m ago) 105m
kube-system kube-proxy-4slm4 1/1 Running 1 (76m ago) 86m
kube-system kube-proxy-4tnx2 1/1 Running 2 (74m ago) 105m
kube-system kube-proxy-9dgsj 1/1 Running 1 (73m ago) 85m
kube-system kube-proxy-cgr44 1/1 Running 1 (76m ago) 86m
kube-system kube-scheduler-kubernetes1 1/1 Running 2 (74m ago) 105m
registry-space private-repository-k8s-6d5d954b4f-xkmj5 0/1 Pending 0 4m55s
kubernetes@kubernetes1:/opt/registry$
Do you know how I can let Kubernetes decide where to deploy the pod?
Let's try the following (disregard the paths you currently have and use the ones in the example; you can change them later). We can adapt it to your needs once dynamic provisioning is working. At the very bottom there is a mysql image as an example; use busybox instead, or leave it as it is, to get a better understanding.
NFS Server install. Create NFS Share on File Server (Usually master node)
#Include prerequisites
sudo apt update -y # Run updates prior to installing
sudo apt install nfs-kernel-server # Install NFS Server
sudo systemctl enable nfs-server # Set nfs-server to load on startups
sudo systemctl status nfs-server # Check its status
# check server status
root@worker03:/home/brucelee# sudo systemctl status nfs-server
● nfs-server.service - NFS server and services
Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled; vendor preset: enabled)
Active: active (exited) since Fri 2021-08-13 04:25:50 UTC; 18s ago
Process: 2731 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Process: 2732 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
Main PID: 2732 (code=exited, status=0/SUCCESS)
Aug 13 04:25:49 linux03 systemd[1]: Starting NFS server and services...
Aug 13 04:25:50 linux03 systemd[1]: Finished NFS server and services.
# Prepare an empty folder
sudo su # enter root
nfsShare=/nfs-share
mkdir $nfsShare # create folder if it doesn't exist
chown nobody: $nfsShare
chmod -R 777 $nfsShare # not recommended for production
# Edit the nfs server share configs
vim /etc/exports
# add these lines
/nfs-share x.x.x.x/24(rw,sync,no_subtree_check,no_root_squash,no_all_squash,insecure)
# Export directory and make it available
sudo exportfs -rav
# Verify nfs shares
sudo exportfs -v
# Enable ingress for subnet
sudo ufw allow from x.x.x.x/24 to any port nfs
# Check firewall status - inactive firewall is fine for testing
root@worker03:/home/brucelee# sudo ufw status
Status: inactive
NFS Client install (Worker nodes)
# Install prerequisites
sudo apt update -y
sudo apt install nfs-common
# Mount the nfs share
remoteShare=server.ip.here:/nfs-share
localMount=/mnt/testmount
sudo mkdir -p $localMount
sudo mount $remoteShare $localMount
# Unmount
sudo umount $localMount
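# Optional sanity check before relying on the share in Kubernetes (sketch; showmount ships with nfs-common,
# and server.ip.here is the placeholder used above)
showmount -e server.ip.here   # should list /nfs-share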
Dynamic provisioning and storage class defaulted
# Pull the source code
workingDirectory=~/nfs-dynamic-provisioner
mkdir $workingDirectory && cd $workingDirectory
git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
cd nfs-subdir-external-provisioner/deploy
# Deploying the service accounts, accepting defaults
k create -f rbac.yaml
# Editing storage class
vim class.yaml
##############################################
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: managed-nfs-ssd # set this value
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME'
parameters:
archiveOnDelete: "true" # value of true means retaining data upon pod terminations
allowVolumeExpansion: "true" # this attribute doesn't exist by default
##############################################
# Deploying storage class
k create -f class.yaml
# Sample output
stoic@masternode:~/nfs-dynamic-provisioner/nfs-subdir-external-provisioner/deploy$ k get storageclasses.storage.k8s.io
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
managed-nfs-ssd k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 33s
nfs-class kubernetes.io/nfs Retain Immediate true 193d
nfs-client (default) cluster.local/nfs-subdir-external-provisioner Delete Immediate true 12d
# Example of patching an applied object
kubectl patch storageclass managed-nfs-ssd -p '{"allowVolumeExpansion":true}'
kubectl patch storageclass managed-nfs-ssd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' # Set storage class as default
# Editing deployment of dynamic nfs provisioning service pod
vim deployment.yaml
##############################################
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-client-provisioner
labels:
app: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: default
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: nfs-client-provisioner
template:
metadata:
labels:
app: nfs-client-provisioner
spec:
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-client-provisioner
image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: k8s-sigs.io/nfs-subdir-external-provisioner
- name: NFS_SERVER
value: X.X.X.X # change this value
- name: NFS_PATH
value: /nfs-share # change this value
volumes:
- name: nfs-client-root
nfs:
server: 192.168.100.93 # change this value
path: /nfs-share # change this value
##############################################
# Creating nfs provisioning service pod
k create -f deployment.yaml
# Troubleshooting: example where the deployment was stuck pending resources that rbac.yaml should have created
stoic@masternode: $ k describe deployments.apps nfs-client-provisioner
Name: nfs-client-provisioner
Namespace: default
CreationTimestamp: Sat, 14 Aug 2021 00:09:24 +0000
Labels: app=nfs-client-provisioner
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=nfs-client-provisioner
Replicas: 1 desired | 0 updated | 0 total | 0 available | 1 unavailable
StrategyType: Recreate
MinReadySeconds: 0
Pod Template:
Labels: app=nfs-client-provisioner
Service Account: nfs-client-provisioner
Containers:
nfs-client-provisioner:
Image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
Port: <none>
Host Port: <none>
Environment:
PROVISIONER_NAME: k8s-sigs.io/nfs-subdir-external-provisioner
NFS_SERVER: X.X.X.X
NFS_PATH: /nfs-share
Mounts:
/persistentvolumes from nfs-client-root (rw)
Volumes:
nfs-client-root:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: X.X.X.X
Path: /nfs-share
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Progressing True NewReplicaSetCreated
Available False MinimumReplicasUnavailable
ReplicaFailure True FailedCreate
OldReplicaSets: <none>
NewReplicaSet: nfs-client-provisioner-7768c6dfb4 (0/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 3m47s deployment-controller Scaled up replica set nfs-client-provisioner-7768c6dfb4 to 1
# Get the default nfs storage class
echo $(kubectl get sc -o=jsonpath='{range .items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=="true")]}{@.metadata.name}{"\n"}{end}')
PersistentVolumeClaim (notice the storageClassName: it should be one of the storage classes listed in the previous step)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-persistentvolume-claim
namespace: default
spec:
storageClassName: nfs-client
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
PersistentVolume
It is created dynamically! Confirm that it is there with the correct values by running this command:
kubectl get pv -A
Deployment
On your deployment you need two things, volumeMounts (for each container) and volumes (for all containers).
Notice: volumeMounts->name=data and volumes->name=data must match, and claimName is my-persistentvolume-claim, which is the same as your PVC.
...
spec:
containers:
- name: mysql
image: mysql:8.0.30
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
volumes:
- name: data
persistentVolumeClaim:
claimName: my-persistentvolume-claim
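Once this is applied, the claim should go to Bound and a dynamically provisioned PV should appear alongside it; a quick check:
kubectl get pvc my-persistentvolume-claim
kubectl get pv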
I want to create a private Kubernetes registry from this tutorial: https://www.linuxtechi.com/setup-private-docker-registry-kubernetes/
I implemented this:
Generate Self-Signed Certificate
cd /opt
sudo mkdir certs
cd certs
sudo touch registry.key
cd /opt
sudo openssl req -newkey rsa:4096 -nodes -sha256 -keyout \
./certs/registry.key -x509 -days 365 -out ./certs/registry.crt
ls -l certs/
Create registry folder
cd /opt
mkdir registry
Copy-paste private-registry.yaml into /opt/registry
apiVersion: apps/v1
kind: Deployment
metadata:
name: private-repository-k8s
labels:
app: private-repository-k8s
spec:
replicas: 1
selector:
matchLabels:
app: private-repository-k8s
template:
metadata:
labels:
app: private-repository-k8s
spec:
volumes:
- name: certs-vol
hostPath:
path: /opt/certs
type: Directory
- name: registry-vol
hostPath:
path: /opt/registry
type: Directory
containers:
- image: registry:2
name: private-repository-k8s
imagePullPolicy: IfNotPresent
env:
- name: REGISTRY_HTTP_TLS_CERTIFICATE
value: "/certs/registry.crt"
- name: REGISTRY_HTTP_TLS_KEY
value: "/certs/registry.key"
ports:
- containerPort: 5000
volumeMounts:
- name: certs-vol
mountPath: /certs
- name: registry-vol
mountPath: /var/lib/registry
kubernetes@kubernetes1:/opt/registry$ kubectl create -f private-registry.yaml
deployment.apps/private-repository-k8s created
kubernetes@kubernetes1:/opt/registry$ kubectl get deployments private-repository-k8s
NAME READY UP-TO-DATE AVAILABLE AGE
private-repository-k8s 0/1 1 0 12s
kubernetes@kubernetes1:/opt/registry$
I have the following questions:
I have a control plane and 2 worker nodes. Is it possible to have a folder located only on the control plane under /opt/registry and deploy images to all worker nodes without using shared folders?
As an alternative, more resilient solution: with a control plane and 2 worker nodes, is it possible to have a folder located on all worker nodes and on the control plane under /opt/registry and deploy images to all worker nodes without manually created shared folders? I want Kubernetes to manage repository replication across all nodes, i.e. the data in /opt/registry should be synchronized automatically by Kubernetes.
Do you know how I can debug this configuration? As you can see, the pod is not starting.
EDIT: Log file:
kubernetes@kubernetes1:/opt/registry$ kubectl logs private-repository-k8s-6ddbcd9c45-s6dfq
Error from server (BadRequest): container "private-repository-k8s" in pod "private-repository-k8s-6ddbcd9c45-s6dfq" is waiting to start: ContainerCreating
kubernetes@kubernetes1:/opt/registry$
Attempt 2:
I tried this configuration deployed from control plane:
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv1
spec:
capacity:
storage: 256Mi # specify your own size
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Retain
local:
path: /opt/registry # can be any path
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions: # specify the node label which maps to your control-plane node.
- key: kubernetes1
operator: In
values:
- controlplane-1
accessModes:
- ReadWriteOnce # only 1 node will read/write on the path.
# - ReadWriteMany # multiple nodes will read/write on the path
Note: the control-plane hostname is kubernetes1, so I changed the value in the configuration above. I get this:
kubernetes@kubernetes1:~$ cd /opt/registry
kubernetes@kubernetes1:/opt/registry$ kubectl create -f private-registry1.yaml
persistentvolume/pv1 created
kubernetes@kubernetes1:/opt/registry$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default private-repository-k8s-6ddbcd9c45-s6dfq 0/1 ContainerCreating 0 2d1h
kube-system calico-kube-controllers-58dbc876ff-dgs77 1/1 Running 4 (125m ago) 2d13h
kube-system calico-node-czmzc 1/1 Running 4 (125m ago) 2d13h
kube-system calico-node-q4lxz 1/1 Running 4 (125m ago) 2d13h
kube-system coredns-565d847f94-k94z2 1/1 Running 4 (125m ago) 2d13h
kube-system coredns-565d847f94-nt27m 1/1 Running 4 (125m ago) 2d13h
kube-system etcd-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kube-system kube-apiserver-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kube-system kube-controller-manager-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kube-system kube-proxy-97djs 1/1 Running 5 (125m ago) 2d13h
kube-system kube-proxy-d8bzs 1/1 Running 4 (125m ago) 2d13h
kube-system kube-scheduler-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kubernetes@kubernetes1:/opt/registry$ kubectl logs private-repository-k8s-6ddbcd9c45-s6dfq
Error from server (BadRequest): container "private-repository-k8s" in pod "private-repository-k8s-6ddbcd9c45-s6dfq" is waiting to start: ContainerCreating
Unfortunately, the container is again not created.
For the 1st question, you can try creating a PersistentVolume with node affinity set to the specific control-plane node and tying it to the Deployment via a PersistentVolumeClaim. Here's an example:
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv1
spec:
capacity:
storage: 256Mi # specify your own size
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Retain
local:
path: /opt/registry # can be any path
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions: # specify the node label which maps to your control-plane node.
- key: kubernetes.io/hostname
operator: In
values:
- controlplane-1
accessModes:
- ReadWriteOnce # only 1 node will read/write on the path.
# - ReadWriteMany # multiple nodes will read/write on the path
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv1-claim
spec: # should match specs added in the PersistenVolume
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: 256Mi
apiVersion: apps/v1
kind: Deployment
metadata:
name: private-repository-k8s
labels:
app: private-repository-k8s
spec:
replicas: 1
selector:
matchLabels:
app: private-repository-k8s
template:
metadata:
labels:
app: private-repository-k8s
spec:
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: pv1-claim # specify the PVC that you've created. PVC and Deployment must be in same namespace.
containers:
- image: registry:2
name: private-repository-k8s
imagePullPolicy: IfNotPresent
env:
- name: REGISTRY_HTTP_TLS_CERTIFICATE
value: "/certs/registry.crt"
- name: REGISTRY_HTTP_TLS_KEY
value: "/certs/registry.key"
ports:
- containerPort: 5000
volumeMounts:
- name: task-pv-storage
mountPath: /opt/registry
For question # 2, can you share the logs of your pod?
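Note that while the container is stuck in ContainerCreating, kubectl logs usually returns nothing (as seen above); the pod events are more informative, for example (using the pod name from the question):
kubectl describe pod private-repository-k8s-6ddbcd9c45-s6dfq
kubectl get events --sort-by=.metadata.creationTimestamp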
You can try with the following file:
apiVersion: apps/v1
kind: Deployment
metadata:
name: private-repository-k8s
labels:
app: private-repository-k8s
spec:
replicas: 1
selector:
matchLabels:
app: private-repository-k8s
template:
metadata:
labels:
app: private-repository-k8s
spec:
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: pv1-claim # specify the PVC that you've created. PVC and Deployment must be in same namespace.
containers:
- image: registry:2
name: private-repository-k8s
imagePullPolicy: IfNotPresent
env:
- name: REGISTRY_HTTP_TLS_CERTIFICATE
value: "/certs/registry.crt"
- name: REGISTRY_HTTP_TLS_KEY
value: "/certs/registry.key"
ports:
- containerPort: 5000
volumeMounts:
- name: task-pv-storage
mountPath: /opt/registry
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv1-claim
spec: # should match specs added in the PersistenVolume
accessModes:
- ReadWriteMany
volumeMode: Filesystem
resources:
requests:
storage: 256Mi
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv1
spec:
capacity:
storage: 256Mi # specify your own size
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Retain
local:
path: /opt/registry # can be any path
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions: # specify the node label which maps to your control-plane node.
- key: kubernetes.io/hostname
operator: In
values:
- controlplane-1
accessModes:
- ReadWriteMany
I am deploying a CouchDB cluster on Kubernetes.
It worked, but I got an error when I tried to scale it.
When I scale my StatefulSet, I get this error when I describe couchdb-3:
0/3 nodes are available: 3 pod has unbound immediate
PersistentVolumeClaims.
And this error when I describe hpa:
invalid metrics (1 invalid out of 1), first error is: failed to get
cpu utilization: missing request for cpu
failed to get cpu utilization: missing request for cpu
I run "kubectl get pod -o wide" and receive this result:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
couchdb-0 1/1 Running 0 101m 10.244.2.13 node2 <none> <none>
couchdb-1 1/1 Running 0 101m 10.244.2.14 node2 <none> <none>
couchdb-2 1/1 Running 0 100m 10.244.2.15 node2 <none> <none>
couchdb-3 0/1 Pending 0 15m <none> <none> <none> <none>
How can I fix it?
Kubernetes Version: 1.22.4
Docker Version 20.10.11, build dea9396
Ubuntu 20.04
My hpa file:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: hpa-couchdb
spec:
maxReplicas: 16
minReplicas: 6
scaleTargetRef:
apiVersion: apps/v1
kind: StatefulSet
name: couchdb
targetCPUUtilizationPercentage: 50
pv.yaml:
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: couch-vol-0
labels:
volume: couch-volume
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
nfs:
server: 192.168.1.100
path: "/var/couchnfs/couchdb-0"
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: couch-vol-1
labels:
volume: couch-volume
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
nfs:
server: 192.168.1.100
path: "/var/couchnfs/couchdb-1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: couch-vol-2
labels:
volume: couch-volume
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
nfs:
server: 192.168.1.100
path: "/var/couchnfs/couchdb-2"
I set nfs in /etc/exports: /var/couchnfs 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
statefulset.yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: couchdb
labels:
app: couch
spec:
replicas: 3
serviceName: "couch-service"
selector:
matchLabels:
app: couch
template:
metadata:
labels:
app: couch # pod label
spec:
containers:
- name: couchdb
image: couchdb:2.3.1
imagePullPolicy: "Always"
env:
- name: NODE_NETBIOS_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: NODENAME
value: $(NODE_NETBIOS_NAME).couch-service # FQDN in vm.args
- name: COUCHDB_USER
value: admin
- name: COUCHDB_PASSWORD
value: admin
- name: COUCHDB_SECRET
value: b1709267
- name: ERL_FLAGS
value: "-name couchdb#$(NODENAME)"
- name: ERL_FLAGS
value: "-setcookie b1709267" # the “password” used when nodes connect to each other.
ports:
- name: couchdb
containerPort: 5984
- name: epmd
containerPort: 4369
- containerPort: 9100
volumeMounts:
- name: couch-pvc
mountPath: /opt/couchdb/data
volumeClaimTemplates:
- metadata:
name: couch-pvc
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 10Gi
selector:
matchLabels:
volume: couch-volume
You have 3 persistent volumes and 3 pods, each already claiming one of them. A PV can only be bound by one claim, so when the StatefulSet scales up, the new pod's claim has no PV left to bind to.
Since you are using NFS as backend, you can use dynamic provisioning of persistent volumes.
https://github.com/openebs/dynamic-nfs-provisioner
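As a sketch, once a dynamic NFS provisioner is installed, the volumeClaimTemplates section can reference its StorageClass instead of label-matching pre-created PVs, so every new replica gets its own volume automatically (the class name openebs-kernel-nfs is an assumption; use whatever class your provisioner registers, and drop the selector block):
  volumeClaimTemplates:
  - metadata:
      name: couch-pvc
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: openebs-kernel-nfs   # assumed class name; check `kubectl get storageclass`
      resources:
        requests:
          storage: 10Gi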
I have a pod with a PVC requesting 10Gi, and it is successfully bound to a PV (both definitions below).
I came across the accepted answer of a similar question, which suggests running kubectl -n <namespace> exec <pod-name> df
Doing the same, I got the following:
pavan@p1: kubectl exec mysql-deployment-95f7dd544-mmjv9 df
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 51572172 5797112 43640736 12% /
tmpfs 65536 0 65536 0% /dev
tmpfs 1021680 0 1021680 0% /sys/fs/cgroup
/dev/vda1 51572172 5797112 43640736 12% /etc/hosts
shm 65536 0 65536 0% /dev/shm
tmpfs 1021680 12 1021668 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 1021680 0 1021680 0% /proc/acpi
tmpfs 1021680 0 1021680 0% /sys/firmware
pavan@p1: kubectl exec mysql-deployment-95f7dd544-mmjv9 -- df -h
Filesystem Size Used Avail Use% Mounted on
overlay 50G 5.6G 42G 12% /
tmpfs 64M 0 64M 0% /dev
tmpfs 998M 0 998M 0% /sys/fs/cgroup
/dev/vda1 50G 5.6G 42G 12% /etc/hosts
shm 64M 0 64M 0% /dev/shm
tmpfs 998M 12K 998M 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 998M 0 998M 0% /proc/acpi
tmpfs 998M 0 998M 0% /sys/firmware
I couldn't quite understand the output: I requested 10Gi, yet I don't see any mount with 10Gi total capacity.
PV definition :
apiVersion: v1
kind: PersistentVolume
metadata:
name: mysql-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/mysql"
PVC definition:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
Deployment definition:
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql-deployment
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: mysql-pod
template:
metadata:
labels:
app: mysql-pod
spec:
containers:
- name: mysql-container
image: mysql:5.7
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
P.s: Node has a capacity of 50G
EDIT 1:
PV describe:
pavan@p1: kubectl describe pv/mysql-pv-volume
Name: mysql-pv-volume
Labels: type=local
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"labels":{"type":"local"},"name":"mysql-pv-volume"},"spec":{"acc...
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: manual
Status: Bound
Claim: default/mysql-pv-claim
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 10Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /mnt/mysql
HostPathType:
Events: <none>
PVC describe:
pavan@p1: kubectl describe pvc/mysql-pv-claim
Name: mysql-pv-claim
Namespace: default
StorageClass: manual
Status: Bound
Volume: mysql-pv-volume
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"mysql-pv-claim","namespace":"default"},"spec":{"acc...
pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 10Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: mysql-deployment-95f7dd544-mmjv9
Events: <none>
Pod describe:
pavan@p1: kubectl describe pod/mysql-deployment-95f7dd544-mmjv9
Name: mysql-deployment-95f7dd544-mmjv9
Namespace: default
Priority: 0
Node: pay0k-k8-dev-bytq/10.130.219.196
Start Time: Mon, 09 Sep 2019 18:14:17 +0800
Labels: app=mysql-pod
pod-template-hash=95f7dd544
Annotations: <none>
Status: Running
IP: 10.244.0.123
Controlled By: ReplicaSet/mysql-deployment-95f7dd544
Containers:
mysql-container:
Container ID: docker://83f4730892fd6908ef3dfae3b9125d25cb7467d24df89323c43d3ab136376147
Image: mysql:5.7
Image ID: docker-pullable://mysql@sha256:1a121f2e7590f949b9ede7809395f209dd9910e331e8372e6682ba4bebcc020b
Port: 3306/TCP
Host Port: 0/TCP
State: Running
Started: Mon, 09 Sep 2019 18:14:19 +0800
Ready: True
Restart Count: 0
Environment:
MYSQL_DATABASE: <set to the key 'mysql-database' of config map 'mysqldb'> Optional: false
MYSQL_ROOT_PASSWORD: <set to the key 'mysql-root-password' in secret 'db-credentials'> Optional: false
MYSQL_USER: <set to the key 'mysql-user' in secret 'db-credentials'> Optional: false
MYSQL_PASSWORD: <set to the key 'mysql-password' in secret 'db-credentials'> Optional: false
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-gbpxc (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
default-token-gbpxc:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-gbpxc
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 41m default-scheduler Successfully assigned default/mysql-deployment-95f7dd544-mmjv9 to pay0k-k8-dev-bytq
Normal Pulled 41m kubelet, pay0k-k8-dev-bytq Container image "mysql:5.7" already present on machine
Normal Created 41m kubelet, pay0k-k8-dev-bytq Created container mysql-container
Normal Started 41m kubelet, pay0k-k8-dev-bytq Started container mysql-container
In your PV definition you specify hostPath, so your data is stored directly on the worker node's filesystem; that is why you see 50GB. You are skipping the extra layer provided by the cloud provider, which creates a dedicated volume for the PV directly from the PVC.
Keeping data directly on a node isn't a good approach, because the node could be removed or replaced at any time.
You should use DigitalOcean PVs instead of your manually defined one; if you run the pod on a DigitalOcean-provisioned PV, the output of df should show 10GB.
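A hedged sketch of what the claim could look like when backed by the cloud provider's storage class instead of a manual hostPath PV; do-block-storage is an assumption here and should be replaced with whatever kubectl get storageclass reports in your cluster:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: do-block-storage   # assumed DigitalOcean class; verify with `kubectl get storageclass`
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi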
I am currently trying to deploy the following on Minikube. I used the configuration files below to use a hostPath as persistent storage on the Minikube node.
apiVersion: v1
kind: PersistentVolume
metadata:
name: "pv-volume"
spec:
capacity:
storage: "20Gi"
accessModes:
- "ReadWriteOnce"
hostPath:
path: /data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: "orientdb-pv-claim"
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "20Gi"
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: orientdbservice
spec:
#replicas: 1
template:
metadata:
name: orientdbservice
labels:
run: orientdbservice
test: orientdbservice
spec:
containers:
- name: orientdbservice
image: orientdb:latest
env:
- name: ORIENTDB_ROOT_PASSWORD
value: "rootpwd"
ports:
- containerPort: 2480
name: orientdb
volumeMounts:
- name: orientdb-config
mountPath: /data/orientdb/config
- name: orientdb-databases
mountPath: /data/orientdb/databases
- name: orientdb-backup
mountPath: /data/orientdb/backup
volumes:
- name: orientdb-config
persistentVolumeClaim:
claimName: orientdb-pv-claim
- name: orientdb-databases
persistentVolumeClaim:
claimName: orientdb-pv-claim
- name: orientdb-backup
persistentVolumeClaim:
claimName: orientdb-pv-claim
---
apiVersion: v1
kind: Service
metadata:
name: orientdbservice
labels:
run: orientdbservice
spec:
type: NodePort
selector:
run: orientdbservice
ports:
- protocol: TCP
port: 2480
name: http
which results in the following:
#kubectl get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-volume 20Gi RWO Retain Available 4h
pvc-cd14d593-78fc-11e7-a46d-1277ec3dd2b5 20Gi RWO Delete Bound default/orientdb-pv-claim standard 4h
#kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
orientdb-pv-claim Bound pvc-cd14d593-78fc-11e7-a46d-1277ec3dd2b5 20Gi RWO
#kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
orientdbservice 10.0.0.16 <nodes> 2480:30552/TCP 4h
#kubectl get pods
NAME READY STATUS RESTARTS AGE
orientdbservice-458328598-zsmw5 0/1 ContainerCreating 0 4h
#kubectl describe pod orientdbservice-458328598-zsmw5
Events:
FirstSeen LastSeen Count From SubObjectPath TypeReason Message
--------- -------- ----- ---- ------------- -------- ------ -------
4h 1m 37 kubelet, minikube Warning FailedMount Unable to mount volumes for pod "orientdbservice-458328598-zsmw5_default(392b1298-78ff-11e7-a46d-1277ec3dd2b5)": timeout expired waiting for volumes to attach/mount for pod "default"/"orientdbservice-458328598-zsmw5". list of unattached/unmounted volumes=[orientdb-databases]
4h 1m 37 kubelet, minikube Warning FailedSync Error syncing pod
I see the following error
Unable to mount volumes for pod,timeout expired waiting for volumes to attach/mount for pod
Is there something incorrect in the way I am creating the PersistentVolume and PersistentVolumeClaim on my node?
minikube version: v0.20.0
Appreciate all the help
Your configuration is fine.
Tested under minikube v0.24.0, minikube v0.25.0 and minikube v0.26.1 without any problem.
Keep in mind that minikube is under active development and, especially if you're on Windows, it is, as they say, experimental software.
Update to a newer version of minikube and redeploy it. This should solve the problem.
You can check for updates with the minikube update-check command which results in something like this:
$ minikube update-check
CurrentVersion: v0.25.0
LatestVersion: v0.26.1
To upgrade minikube, simply run minikube delete, which deletes your current minikube installation, and then download the new release as described.
$ minikube delete
There is a newer version of minikube available (v0.26.1). Download it here:
https://github.com/kubernetes/minikube/releases/tag/v0.26.1
To disable this notification, run the following:
minikube config set WantUpdateNotification false
Deleting local Kubernetes cluster...
Machine deleted.
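After installing the newer minikube binary, bring a fresh cluster up and re-apply your manifests (sketch; replace the manifest name with your own file):
$ minikube start
$ kubectl create -f your-manifests.yaml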
For some reason the provisioner k8s.io/minikube-hostpath in minikube doesn't work.
So:
delete the default storage class: kubectl delete storageclass standard
create the following storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: standard
provisioner: docker.io/hostpath
reclaimPolicy: Retain
Also, in your volume mounts you have one PVC bound to one PV, so instead of multiple volumes just use one volume and mount it with different subPaths; that will create three subdirectories (backup, config & databases) under your host's /data directory:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: orientdbservice
spec:
#replicas: 1
template:
metadata:
name: orientdbservice
labels:
run: orientdbservice
test: orientdbservice
spec:
containers:
- name: orientdbservice
image: orientdb:latest
env:
- name: ORIENTDB_ROOT_PASSWORD
value: "rootpwd"
ports:
- containerPort: 2480
name: orientdb
volumeMounts:
- name: orientdb
mountPath: /data/orientdb/config
subPath: config
- name: orientdb
mountPath: /data/orientdb/databases
subPath: databases
- name: orientdb
mountPath: /data/orientdb/backup
subPath: backup
volumes:
- name: orientdb
persistentVolumeClaim:
claimName: orientdb-pv-claim
Now deploy your yaml: kubectl create -f yourorientdb.yaml
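To confirm the change took effect, check that the claim is bound and the pod leaves ContainerCreating, for example:
kubectl get pvc orientdb-pv-claim
kubectl get pods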