Sentinel StatefulSet fails to schedule: can't find persistent volumes to bind - kubernetes

Good afternoon.
I really need some help getting a group of Sentinels up so that they can monitor and perform elections for my Redis pods, which are running without issue. The Sentinel StatefulSet config, which spells out the volumes, is included at the bottom of this post. The first Sentinel pod, sentinel-0, sits at Pending, while all three Redis instances show READY 1/1.
The Sentinel pods never get scheduled; when I apply the Sentinel StatefulSet, I get the following scheduling error:
Warning FailedScheduling 5s default-scheduler 0/4 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 3 node(s) didn't find available persistent volumes to bind.
Warning FailedScheduling 4s default-scheduler 0/4 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 3 node(s) didn't find available persistent volumes to bind.
About my kubernetes setup:
I am running a four-node bare-metal Kubernetes cluster: one master node and three worker nodes.
For storage, I am using a 'local-storage' StorageClass shared across the nodes. Currently a single persistent-volume configuration file defines three volumes, one per worker node. This works for the Redis StatefulSet, but not for Sentinel (Sentinel config at the bottom).
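The class itself is the usual no-provisioner local one, roughly like this:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage      # matches storageClassName in the PVs and claim templates below
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer   # local PVs are only bound once a consuming pod is scheduled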
See below for the persistent volume config (all three of pv-volume-node-0, 1, 2 are Bound):
kind: PersistentVolume
apiVersion: v1
metadata:
name: ag1-pv-volume-node-0
labels:
type: local
spec:
storageClassName: local-storage
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
local:
path: "/var/opt/mssql"
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- k8s-node-0
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: ag1-pv-volume-node-1
labels:
type: local
spec:
storageClassName: local-storage
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
local:
path: "/var/opt/mssql"
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- k8s-node-1
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: ag1-pv-volume-node-2
labels:
type: local
spec:
storageClassName: local-storage
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
local:
path: "/var/opt/mssql"
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- k8s-node-2
Note: the path "/var/opt/mssql" is the stateful data directory for the Redis cluster. The name is a misnomer and in no way reflects a SQL database (I just reused the directory from a walkthrough), and it works.
Presently all three Redis pods are successfully deployed with a functioning StatefulSet; see below for the Redis config (all working).
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis
spec:
serviceName: redis
replicas: 3
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
initContainers:
- name: config
image: redis:6.0-alpine
command: [ "sh", "-c" ]
args:
- |
cp /tmp/redis/redis.conf /etc/redis/redis.conf
echo "finding master..."
MASTER_FDQN=`hostname -f | sed -e 's/redis-[0-9]\./redis-0./'`
if [ "$(redis-cli -h sentinel -p 5000 ping)" != "PONG" ]; then
echo "master not found, defaulting to redis-0"
if [ "$(hostname)" == "redis-0" ]; then
echo "this is redis-0, not updating config..."
else
echo "updating redis.conf..."
echo "slaveof $MASTER_FDQN 6379" >> /etc/redis/redis.conf
fi
else
echo "sentinel found, finding master"
MASTER="$(redis-cli -h sentinel -p 5000 sentinel get-master-addr-by-name mymaster | grep -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')"
echo "master found : $MASTER, updating redis.conf"
echo "slaveof $MASTER 6379" >> /etc/redis/redis.conf
fi
volumeMounts:
- name: redis-config
mountPath: /etc/redis/
- name: config
mountPath: /tmp/redis/
containers:
- name: redis
image: redis:6.0-alpine
command: ["redis-server"]
args: ["/etc/redis/redis.conf"]
ports:
- containerPort: 6379
name: redis
volumeMounts:
- name: data
mountPath: /var/opt/mssql
- name: redis-config
mountPath: /etc/redis/
volumes:
- name: redis-config
emptyDir: {}
- name: config
configMap:
name: redis-config
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "local-storage"
resources:
requests:
storage: 50Mi
---
apiVersion: v1
kind: Service
metadata:
name: redis
spec:
clusterIP: None
ports:
- port: 6379
targetPort: 6379
name: redis
selector:
app: redis
The real issue I'm having, I believe, stems from how I've configured the Sentinel StatefulSet. The pods won't schedule, and the reported reason is that no persistent volumes are available to bind.
SENTINEL STATEFULSET CONFIG (the problem is here; I can't figure out how to set it up correctly with the volumes I made):
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: sentinel
spec:
serviceName: sentinel
replicas: 3
selector:
matchLabels:
app: sentinel
template:
metadata:
labels:
app: sentinel
spec:
initContainers:
- name: config
image: redis:6.0-alpine
command: [ "sh", "-c" ]
args:
- |
REDIS_PASSWORD=a-very-complex-password-here
nodes=redis-0.redis.redis.svc.cluster.local,redis-1.redis.redis.svc.cluster.local,redis-2.redis.redis.svc.cluster.local
for i in ${nodes//,/ }
do
echo "finding master at $i"
MASTER=$(redis-cli --no-auth-warning --raw -h $i -a $REDIS_PASSWORD info replication | awk '{print $1}' | grep master_host: | cut -d ":" -f2)
if [ "$MASTER" == "" ]; then
echo "no master found"
MASTER=
else
echo "found $MASTER"
break
fi
done
echo "sentinel monitor mymaster $MASTER 6379 2" >> /tmp/master
echo "port 5000
$(cat /tmp/master)
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1
sentinel auth-pass mymaster $REDIS_PASSWORD
" > /etc/redis/sentinel.conf
cat /etc/redis/sentinel.conf
volumeMounts:
- name: redis-config
mountPath: /etc/redis/
containers:
- name: sentinel
image: redis:6.0-alpine
command: ["redis-sentinel"]
args: ["/etc/redis/sentinel.conf"]
ports:
- containerPort: 5000
name: sentinel
volumeMounts:
- name: redis-config
mountPath: /etc/redis/
- name: data
mountPath: /var/opt/mssql
volumes:
- name: redis-config
emptyDir: {}
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "local-storage"
resources:
requests:
storage: 50Mi
---
apiVersion: v1
kind: Service
metadata:
name: sentinel
spec:
clusterIP: None
ports:
- port: 5000
targetPort: 5000
name: sentinel
selector:
app: sentinel
This is my first post here. I am a big fan of stackoverflow!

You may try to create three PVs using this template:
kind: PersistentVolume
apiVersion: v1
metadata:
name: ag1-pv-volume-node-0
labels:
type: local
spec:
storageClassName: local-storage
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
claimRef:
namespace: default
name: data-redis-0
local:
path: "/var/opt/mssql"
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- k8s-node-0
The important part here is the claimRef field, which ties the PV to the PVC created by the StatefulSet.
The claim name it references has to follow a specific format.
Read more here: https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/preexisting-pd#using_a_preexisting_disk_in_a_statefulset
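As a quick way to see which claim names to put in claimRef: a StatefulSet names each claim <volumeClaimTemplate name>-<StatefulSet name>-<ordinal>, so the sentinel set above creates data-sentinel-0, data-sentinel-1 and data-sentinel-2 (the redis set already owns data-redis-0 through data-redis-2). For example:
kubectl get pvc -n default   # lists the claims and shows which PV, if any, each one is bound to
# Each pre-created PV's claimRef must match one of these claim names (and the namespace) exactly,
# otherwise the pod keeps reporting "didn't find available persistent volumes to bind".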

Related

Redis cluster on minikube stays in "pending" state

I have installed minikube on a local Windows machine and am trying to install a Redis cluster.
I created everything using kubectl create -f <resource> -n <namespace>. The following are the files that were used to create the cluster.
Storage class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Delete
Persistent volumes.
kind: PersistentVolume
metadata:
name: local-pv1
spec:
storageClassName: local-storage
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/storage/data1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: local-pv2
spec:
storageClassName: local-storage
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/storage/data2"
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: local-pv3
spec:
storageClassName: local-storage
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/storage/data3"
Redis config map
apiVersion: v1
kind: ConfigMap
metadata:
name: redis-cluster
labels:
app: redis-cluster
data:
fix-ip.sh: |
#!/bin/sh
CLUSTER_CONFIG="/data/nodes.conf"
echo "creating nodes"
if [ -f ${CLUSTER_CONFIG} ]; then
echo "[ INFO ]File:${CLUSTER_CONFIG} is Found"
else
touch $CLUSTER_CONFIG
fi
if [ -z "${POD_IP}" ]; then
echo "Unable to determine Pod IP address!"
exit 1
fi
echo "Updating my IP to ${POD_IP} in ${CLUSTER_CONFIG}"
sed -i.bak -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${CLUSTER_CONFIG}
echo "done"
exec "$#"
redis.conf: |+
cluster-enabled yes
cluster-require-full-coverage no
cluster-node-timeout 15000
cluster-config-file /data/nodes.conf
cluster-migration-barrier 1
appendonly yes
protected-mode no
Statefulset redis cluster
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis
spec:
serviceName: redis
replicas: 3
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
initContainers:
- name: config
image: redis:6.2.3-alpine
command: [ "sh", "-c" ]
args:
- |
cp /tmp/redis/redis.conf /etc/redis/redis.conf
echo "finding master..."
MASTER_FDQN=`hostname -f | sed -e 's/redis-[0-9]\./redis-0./'`
if [ "$(redis-cli -h sentinel -p 5000 ping)" != "PONG" ]; then
echo "master not found, defaulting to redis-0"
if [ "$(hostname)" == "redis-0" ]; then
echo "this is redis-0, not updating config..."
else
echo "updating redis.conf..."
echo "slaveof $MASTER_FDQN 6379" >> /etc/redis/redis.conf
fi
else
echo "sentinel found, finding master"
MASTER="$(redis-cli -h sentinel -p 5000 sentinel get-master-addr-by-name mymaster | grep -E '(^redis-\d{1,})|([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})')"
echo "master found : $MASTER, updating redis.conf"
echo "slaveof $MASTER 6379" >> /etc/redis/redis.conf
fi
volumeMounts:
- name: redis-config
mountPath: /etc/redis/
- name: config
mountPath: /tmp/redis/
containers:
- name: redis
image: redis:6.2.3-alpine
command: ["redis-server"]
args: ["/etc/redis/redis.conf"]
ports:
- containerPort: 6379
name: redis
volumeMounts:
- name: data
mountPath: /data
- name: redis-config
mountPath: /etc/redis/
volumes:
- name: redis-config
emptyDir: {}
- name: config
configMap:
name: redis-config
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "local-storage"
resources:
requests:
storage: 500Mi
Headless Redis service
apiVersion: v1
kind: Service
metadata:
name: redis-cluster
namespace: redis
spec:
type: ClusterIP
ports:
- port: 6379
targetPort: 6379
name: client
- port: 16379
targetPort: 16379
name: gossip
selector:
app: redis-cluster
This is what I get when listing the pods:
redis-cluster-0 0/1 Pending 0 2d
Describing the pod shows the following message; not sure if this is the issue:
Warning FailedScheduling 6m24s (x110 over 46h) default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
It's likely that your cluster has no StorageClass named local-storage. Check your storage classes using kubectl get sc. To fix this, do either of the following (commands sketched just after this list):
Create a StorageClass called local-storage
Change the storageClassName in your manifests to a pre-existing StorageClass
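A quick check-and-fix could look like this (the manifest file name is just an example):
kubectl get sc                       # list the StorageClasses the cluster actually has
kubectl apply -f storage-class.yaml  # apply the local-storage class from the question if it is missing
kubectl get pvc -n <namespace>       # re-check the claims once the class and matching PVs exist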

Kubernetes MountVolume.NewMounter initialization failed for volume [name] : path [name] does not exist

I am trying to deploy an Elasticsearch cluster on Kubernetes; for that I am using local persistent volumes.
Here are my manifest files:
persistantvolume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: example-local-pv
spec:
capacity:
storage: 500Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /home/kb/data
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minikube
storage.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
service.yaml
apiVersion: v1
kind: Service
metadata:
name: es
labels:
service: elasticsearch
spec:
clusterIP: None
ports:
- port: 9200
name: serving
- port: 9300
name: node-to-node
selector:
service: elasticsearch
elasticsearch.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: elasticsearch
labels:
service: elasticsearch
spec:
serviceName: es
replicas: 3
selector:
matchLabels:
service: elasticsearch
template:
metadata:
labels:
service: elasticsearch
spec:
terminationGracePeriodSeconds: 300
initContainers:
- name: fix-the-volume-permission
image: busybox
command:
- sh
- -c
- chown -R 1000:1000 /usr/share/elasticsearch/data
securityContext:
privileged: true
volumeMounts:
- name: data
mountPath: /usr/share/elasticsearch/data
- name: increase-the-vm-max-map-count
image: busybox
command:
- sysctl
- -w
- vm.max_map_count=262144
securityContext:
privileged: true
- name: increase-the-ulimit
image: busybox
command:
- sh
- -c
- ulimit -n 65536
securityContext:
privileged: true
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4
ports:
- containerPort: 9200
name: http
- containerPort: 9300
name: tcp
resources:
requests:
memory: 4Gi
limits:
memory: 6Gi
env:
- name: cluster.name
value: elasticsearch-cluster
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: discovery.zen.ping.unicast.hosts
value: "elasticsearch-0.es.default.svc.cluster.local,elasticsearch-1.es.default.svc.cluster.local,elasticsearch-2.es.default.svc.cluster.local"
- name: ES_JAVA_OPTS
value: -Xms4g -Xmx4g
volumeMounts:
- name: data
mountPath: /usr/share/elasticsearch/data
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
storageClassName: local-storage
resources:
requests:
storage: 10Gi
kubectl apply -f persistantvolume.yaml
kubectl apply -f storage.yaml
kubectl apply -f service.yaml
kubectl apply -f elasticsearch.yaml
My pod is in the Init:0/3 state, and kubectl describe pod <podname> shows:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 44s default-scheduler Successfully assigned default/elasticsearch-0 to minikube
Warning FailedMount 12s (x7 over 44s) kubelet MountVolume.NewMounter initialization failed for volume "example-local-pv" : path "/home/kb/data" does not exist
I am a beginner with Kubernetes; please help me figure out what I am missing here. /home/kb/data does exist on my local drive.
Assuming you launched minikube with one of its VM drivers, the /home/kb/data directory exists on your local drive but probably NOT inside the minikube VM. Does that make sense? The Kubernetes local volume plugin won't create missing directories. If you JUST want to "fix the error", then minikube ssh -- mkdir /home/kb/data might do the trick. This answer explains more of the background.
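For example (assuming a VM-based driver):
minikube ssh -- sudo mkdir -p /home/kb/data   # create the path inside the minikube VM
minikube ssh -- ls -ld /home/kb/data          # confirm it now exists where the kubelet will look for it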

HostPath assign persistentVolume to the specific work node in cluster

Using kubeadm I created a cluster with one master and one worker node.
Now I want to create a persistentVolume on the worker node and bind it to the Postgres pod.
I expected the config below to create the persistentVolume under the path /postgres on the worker node, but it seems hostPath alone does not pick a node in a cluster. How should I pin this volume to the specific node?
kind: PersistentVolume
apiVersion: v1
metadata:
name: pv-postgres
labels:
type: local
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/postgres"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pvc-postgres
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres
spec:
selector:
matchLabels:
app: postgres
replicas: 1
strategy: {}
template:
metadata:
labels:
app: postgres
spec:
dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true
volumes:
- name: vol-postgres
persistentVolumeClaim:
claimName: pvc-postgres
containers:
- name: postgres
image: postgres:12
imagePullPolicy: Always
env:
- name: DB_USER
value: postgres
- name: DB_PASS
value: postgres
- name: DB_NAME
value: postgres
ports:
- name: postgres
containerPort: 5432
volumeMounts:
- mountPath: "/postgres"
name: vol-postgres
livenessProbe:
exec:
command:
- pg_isready
- -h
- localhost
- -U
- postgres
initialDelaySeconds: 30
timeoutSeconds: 5
readinessProbe:
exec:
command:
- pg_isready
- -h
- localhost
- -U
- postgres
initialDelaySeconds: 5
timeoutSeconds: 1
---
apiVersion: v1
kind: Service
metadata:
name: postgres
spec:
ports:
- name: postgres
port: 5432
targetPort: postgres
selector:
app: postgres
As per the docs:
A hostPath volume mounts a file or directory from the host node’s filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.
In short, a hostPath volume refers to a resource on the node (machine or VM) where the pod is scheduled, which means the folder already needs to exist on that node.
To pin the volume to a specific node you have to use a nodeSelector (or node affinity) in your Deployment and PV.
Depending on the scenario, hostPath is not the best idea; however, the example YAMLs below should show the concept. They are based on your YAMLs but use the nginx image.
kind: PersistentVolume
apiVersion: v1
metadata:
name: pv-postgres
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/tmp/postgres" ## this folder need exist on your node. Keep in minds also who have permissions to folder. Used tmp as it have 3x rwx
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- ubuntu18-kubeadm-worker1
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pvc-postgres
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres
spec:
selector:
matchLabels:
app: postgres
replicas: 1
strategy: {}
template:
metadata:
labels:
app: postgres
spec:
containers:
- image: nginx
name: nginx
volumeMounts:
- mountPath: /home ## path to folder inside container
name: vol-postgres
affinity: ## node affinity so all pods are scheduled on the specific node named ubuntu18-kubeadm-worker1
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- ubuntu18-kubeadm-worker1
dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true
volumes:
- name: vol-postgres
persistentVolumeClaim:
claimName: pvc-postgres
persistentvolume/pv-postgres created
persistentvolumeclaim/pvc-postgres created
deployment.apps/postgres created
Unfortunately a PV is bound to a PVC in a 1:1 relationship, so each time you would need to create both a PV and a PVC.
However, if you are using hostPath, it's enough to specify nodeAffinity, volumeMounts, and volumes in the Deployment YAML, without any PV or PVC:
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres
spec:
selector:
matchLabels:
app: postgres
replicas: 1
strategy: {}
template:
metadata:
labels:
app: postgres
spec:
containers:
- image: nginx:latest
name: nginx
volumeMounts:
- mountPath: /home
name: vol-postgres
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- ubuntu18-kubeadm-worker1
dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true
volumes:
- name: vol-postgres
hostPath:
path: /tmp/postgres
deployment.apps/postgres created
user@ubuntu18-kubeadm-master:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-77bc9c4566-jgxqq 1/1 Running 0 9s
user@ubuntu18-kubeadm-master:~$ kubectl exec -ti postgres-77bc9c4566-jgxqq /bin/bash
root@ubuntu18-kubeadm-worker1:/# cd home
root@ubuntu18-kubeadm-worker1:/home# ls
test.txt txt.txt
There are other ways to achieve this. You can back the volume with a NAS, or build a storage cluster out of disks and create a PersistentVolume and PersistentVolumeClaim on top of it. If your use case is persistence on local storage, you can create a local-storage StorageClass backed by a directory on one of your cluster nodes, and that volume space can then be claimed by pods in your cluster. To create a local-storage StorageClass, refer to https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/

Failed to provision volume with StorageClass "slow": Failed to get GCE GCECloudProvider with error <nil>

I'm trying to install a Redis cluster (StatefulSet) outside of GKE, and when I check the PVC I get:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 10s persistentvolume-controller Failed to provision volume with StorageClass "slow": Failed to get GCE GCECloudProvider with error <nil>
I already added "--cloud-provider=gce" to /etc/kubernetes/manifests/kube-controller-manager.yaml and /etc/kubernetes/manifests/kube-apiserver.yaml and restarted, but the result is still the same.
Can anyone help me please? What's the trick for making k8s work on GCP?
My manifest is taken from here:
---
apiVersion: v1
kind: ConfigMap
metadata:
name: redis-cluster
labels:
app: redis-cluster
data:
fix-ip.sh: |
#!/bin/sh
CLUSTER_CONFIG="/data/nodes.conf"
if [ -f ${CLUSTER_CONFIG} ]; then
if [ -z "${POD_IP}" ]; then
echo "Unable to determine Pod IP address!"
exit 1
fi
echo "Updating my IP to ${POD_IP} in ${CLUSTER_CONFIG}"
sed -i.bak -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${CLUSTER_CONFIG}
fi
exec "$#"
redis.conf: |+
cluster-enabled yes
cluster-require-full-coverage no
cluster-node-timeout 15000
cluster-config-file /data/nodes.conf
cluster-migration-barrier 1
appendonly yes
protected-mode no
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: slow
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-standard
replication-type: none
zone: "us-west2-a"
reclaimPolicy: Retain
---
apiVersion: v1
kind: Service
metadata:
name: redis-cluster
labels:
app: redis-cluster
spec:
ports:
- port: 6379
targetPort: 6379
name: client
- port: 16379
targetPort: 16379
name: gossip
clusterIP: None
selector:
app: redis-cluster
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis-cluster
labels:
app: redis-cluster
spec:
serviceName: redis-cluster
replicas: 5
selector:
matchLabels:
app: redis-cluster
template:
metadata:
labels:
app: redis-cluster
spec:
containers:
- name: redis
image: redis:5.0-rc
ports:
- containerPort: 6379
name: client
- containerPort: 16379
name: gossip
command: ["/conf/fix-ip.sh", "redis-server", "/conf/redis.conf"]
args:
- --cluster-announce-ip
- "$(POD_IP)"
readinessProbe:
exec:
command:
- sh
- -c
- "redis-cli -h $(hostname) ping"
initialDelaySeconds: 15
timeoutSeconds: 5
livenessProbe:
exec:
command:
- sh
- -c
- "redis-cli -h $(hostname) ping"
initialDelaySeconds: 20
periodSeconds: 3
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- name: conf
mountPath: /conf
readOnly: false
- name: data
mountPath: /data
readOnly: false
volumes:
- name: conf
configMap:
name: redis-cluster
defaultMode: 0755
volumeClaimTemplates:
- metadata:
name: data
labels:
name: redis-cluster
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: slow
resources:
requests:
storage: 5Gi
Please verify your StorageClass "slow"; it seems there is an indentation problem (starting with reclaimPolicy):
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: slow
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-standard
replication-type: none
zone: "us-west2-a"
reclaimPolicy: Retain
Update:
Please add --cloud-provider=gce to kube-apiserver.yaml, kube-controller-manager.yaml, and KUBELET_KUBECONFIG_ARGS. You can also enable the DefaultStorageClass admission plugin (--enable-admission-plugins=DefaultStorageClass).
Verify the "Cloud API access scopes" permissions in your "VM instance details".
Verify that your StorageClass, PV, and PVC are working properly:
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: slow
annotations:
storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-standard
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-test
spec:
accessModes:
- ReadOnlyMany
storageClassName: slow
resources:
requests:
storage: 1Gi
Google offers two main types of persistent disk, which are provisioned automatically on kubernetes:
Standard storage (labeled pd-standard)
SSD storage (labeled pd-ssd)
By default, GKE will provision standard storage persistent disks. In fact, that’s the only storage class even available at first.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: test-storageclass
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-ssd
You can tell a PersistentVolumeClaim to use the new SSD storage class with the following key/value pair: storageClassName: ssd.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: ssd-storageclass
spec:
accessModes:
- ReadWriteOnce
storageClassName: ssd
resources:
requests:
storage: 1Gi

Kubernetes Permission denied for mounted nfs volume

The following is the k8s definition used:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nfs-pv-provisioning-demo
labels:
demo: nfs-pv-provisioning
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 200Gi
---
apiVersion: v1
kind: ReplicationController
metadata:
name: nfs-server
spec:
securityContext:
runAsUser: 1000
fsGroup: 2000
replicas: 1
selector:
role: nfs-server
template:
metadata:
labels:
role: nfs-server
spec:
containers:
- name: nfs-server
image: k8s.gcr.io/volume-nfs:0.8
ports:
- name: nfs
containerPort: 2049
- name: mountd
containerPort: 20048
- name: rpcbind
containerPort: 111
securityContext:
privileged: true
volumeMounts:
- mountPath: /exports
name: mypvc
volumes:
- name: mypvc
persistentVolumeClaim:
claimName: nfs-pv-provisioning-demo
---
kind: Service
apiVersion: v1
metadata:
name: nfs-server
spec:
ports:
- name: nfs
port: 2049
- name: mountd
port: 20048
- name: rpcbind
port: 111
selector:
role: nfs-server
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteMany
nfs:
# FIXME: use the right IP
server: nfs-server
path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nfs
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
resources:
requests:
storage: 1Gi
---
# This mounts the nfs volume claim into /mnt and continuously
# overwrites /mnt/index.html with the time and hostname of the pod.
apiVersion: v1
kind: ReplicationController
metadata:
name: nfs-busybox
spec:
securityContext:
runAsUser: 1000
fsGroup: 2000
replicas: 2
selector:
name: nfs-busybox
template:
metadata:
labels:
name: nfs-busybox
spec:
containers:
- image: busybox
command:
- sh
- -c
- 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done'
imagePullPolicy: IfNotPresent
name: busybox
volumeMounts:
# name must match the volume name below
- name: nfs
mountPath: "/mnt"
volumes:
- name: nfs
persistentVolumeClaim:
claimName: nfs
Now the /mnt directory in nfs-busybox should have 2000 as its gid (as per the docs), but it still has root as both user and group. Since the application runs as 1000/2000, it is not able to create any logs or data in the /mnt directory.
chmod might solve the issue, but that looks like a workaround. Is there any permanent solution for this?
Observation: if I replace nfs with some other PVC, it works fine, as described in the docs.
Have you tried the initContainers method? It fixes the permissions on the exported directory:
initContainers:
- name: volume-mount-hack
image: busybox
command: ["sh", "-c", "chmod -R 777 /exports"]
volumeMounts:
- name: nfs
mountPath: /exports
If you use a standalone NFS server on a Linux box, I suggest using the no_root_squash option:
/exports *(rw,no_root_squash,no_subtree_check)
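After editing /etc/exports on the NFS host, re-export and verify, for example:
sudo exportfs -ra   # re-export everything listed in /etc/exports
sudo exportfs -v    # verify that the options (rw, no_root_squash, ...) actually took effect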
To manage the directory permissions on the nfs-server, you need to change the security context and raise it to privileged mode:
apiVersion: v1
kind: Pod
metadata:
name: nfs-server
labels:
role: nfs-server
spec:
containers:
- name: nfs-server
image: nfs-server
ports:
- name: nfs
containerPort: 2049
securityContext:
privileged: true