What is wrong with the manifests below?
# config for es data node
apiVersion: v1
kind: ConfigMap
metadata:
namespace: infra
name: elasticsearch-data-config
labels:
app: elasticsearch
role: data
data:
elasticsearch.yml: |-
cluster.name: ${CLUSTER_NAME}
node.name: ${NODE_NAME}
discovery.seed_hosts: ${NODE_LIST}
cluster.initial_master_nodes: ${MASTER_NODES}
network.host: 0.0.0.0
node:
master: false
data: true
ingest: false
xpack.security.enabled: true
xpack.monitoring.collection.enabled: true
---
# service for es data node
apiVersion: v1
kind: Service
metadata:
namespace: infra
name: elasticsearch-data
labels:
app: elasticsearch
role: data
spec:
ports:
- port: 9300
name: transport
selector:
app: elasticsearch
role: data
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
namespace: infra
name: elasticsearch-data
labels:
app: elasticsearch
role: data
spec:
serviceName: "elasticsearch-data"
replicas: 1
template:
metadata:
labels:
app: elasticsearch-data
role: data
spec:
containers:
- name: elasticsearch-data
image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
env:
- name: CLUSTER_NAME
value: elasticsearch
- name: NODE_NAME
value: elasticsearch-data
- name: NODE_LIST
value: elasticsearch-master,elasticsearch-data,elasticsearch-client
- name: MASTER_NODES
value: elasticsearch-master
- name: "ES_JAVA_OPTS"
value: "-Xms300m -Xmx300m"
ports:
- containerPort: 9300
name: transport
volumeMounts:
- name: config
mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
readOnly: true
subPath: elasticsearch.yml
- name: elasticsearch-data-persistent-storage
mountPath: /data/db
volumes:
- name: config
configMap:
name: elasticsearch-data-config
initContainers:
- name: increase-vm-max-map
image: busybox
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
volumeClaimTemplates:
- metadata:
name: elasticsearch-data-persistent-storage
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 10Gi
Output of kubectl describe pod for the StatefulSet's pod:
Name: elasticsearch-data-0
Namespace: infra
Priority: 0
Node: <none>
Labels: app=elasticsearch-data
controller-revision-hash=elasticsearch-data-76bdf989b6
role=data
statefulset.kubernetes.io/pod-name=elasticsearch-data-0
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/elasticsearch-data
Init Containers:
increase-vm-max-map:
Image: busybox
Port: <none>
Host Port: <none>
Command:
sysctl
-w
vm.max_map_count=262144
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9nhmg (ro)
Containers:
elasticsearch-data:
Image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
Port: 9300/TCP
Host Port: 0/TCP
Environment:
CLUSTER_NAME: elasticsearch
NODE_NAME: elasticsearch-data
NODE_LIST: elasticsearch-master,elasticsearch-data,elasticsearch-client
MASTER_NODES: elasticsearch-master
ES_JAVA_OPTS: -Xms300m -Xmx300m
Mounts:
/data/db from elasticsearch-data-persistent-storage (rw)
/usr/share/elasticsearch/config/elasticsearch.yml from config (ro,path="elasticsearch.yml")
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9nhmg (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
elasticsearch-data-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: elasticsearch-data-persistent-storage-elasticsearch-data-0
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: elasticsearch-data-config
Optional: false
default-token-9nhmg:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-9nhmg
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 46s (x4 over 3m31s) default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
kubectl get sc
NAME PROVISIONER AGE
standard (default) kubernetes.io/gce-pd 5d19h
kubectl get pv
No resources found in infra namespace.
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
elasticsearch-data-persistent-storage-elasticsearch-data-0 Pending gp2 8h
It looks like there is some issue with your PVC.
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
elasticsearch-data-persistent-storage-elasticsearch-data-0 Pending gp2 8h
As you can see, the PV is also not created. I think there is an issue with your storage class; it looks like the gp2 storage class is not available in your cluster.
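To confirm which class the claim is waiting for, you can describe the pending claim and list the classes that actually exist (standard kubectl, using the claim name from the output above):
kubectl describe pvc elasticsearch-data-persistent-storage-elasticsearch-data-0 -n infra
kubectl get storageclass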
Either apply this YAML if you are on AWS EKS:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: gp2
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp2
fsType: ext4
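After applying it (the file name below is an assumption), the claim should bind once a default gp2 class exists; verify with the same commands as before:
kubectl apply -f gp2-storageclass.yaml   # file name is an assumption
kubectl get storageclass
kubectl get pvc -n infra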
or simply change the storage class to standard if you are on GCP GKE.
From the StatefulSet docs:
The storage for a given Pod must either be provisioned by a
PersistentVolume Provisioner based on the requested storage class, or
pre-provisioned by an admin.
There should be a StorageClass that can dynamically provision the PV, and that storageClassName should be set in the volumeClaimTemplates; otherwise there needs to be a pre-provisioned PV that can satisfy the PVC.
volumeClaimTemplates:
- metadata:
name: elasticsearch-data-persistent-storage
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "standard"
resources:
requests:
storage: 10Gi
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
namespace: default
name: elasticsearch-data
labels:
app: elasticsearch
role: data
spec:
serviceName: "elasticsearch-data"
replicas: 1
template:
metadata:
labels:
app: elasticsearch-data
role: data
spec:
containers:
- name: elasticsearch-data
image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
env:
- name: CLUSTER_NAME
value: elasticsearch
- name: NODE_NAME
value: elasticsearch-data
- name: NODE_LIST
value: elasticsearch-master,elasticsearch-data,elasticsearch-client
- name: MASTER_NODES
value: elasticsearch-master
- name: "ES_JAVA_OPTS"
value: "-Xms300m -Xmx300m"
ports:
- containerPort: 9300
name: transport
volumeMounts:
- name: config
mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
readOnly: true
subPath: elasticsearch.yml
- name: elasticsearch-data-persistent-storage
mountPath: /data/db
volumes:
- name: config
configMap:
name: elasticsearch-data-config
initContainers:
- name: increase-vm-max-map
image: busybox
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
volumeClaimTemplates:
- metadata:
name: elasticsearch-data-persistent-storage
annotations:
volume.beta.kubernetes.io/storage-class: "standard"
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 10Gi
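If dynamic provisioning is not available at all, the other option from the quoted docs is a pre-provisioned PV that can satisfy the claim. A minimal sketch for GKE, where the storageClassName must match the one used in the claim and the gcePersistentDisk name is an assumption (the disk must already exist):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-data-pv   # illustrative name
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: es-data-disk   # assumption: a pre-created GCE persistent disk with this name
    fsType: ext4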
---
This worked for me. As Avinash said, I simply changed the storage class to standard on GCP GKE.
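To confirm the fix, the same checks from the question can be re-run; the claim should now show Bound and a matching PV should exist:
kubectl get pvc -n infra
kubectl get pv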
Related
When trying to bind a pod to an NFS persistentVolume hosted on another pod, it fails to mount when using docker-desktop. It works perfectly fine elsewhere, even with the exact same YAML.
The error:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m59s default-scheduler Successfully assigned test-project/test-digit-5576c79688-zfg8z to docker-desktop
Warning FailedMount 2m56s kubelet Unable to attach or mount volumes: unmounted volumes=[lagg-connection], unattached volumes=[lagg-connection kube-api-access-h68w7]: timed out waiting for the condition
Warning FailedMount 37s kubelet Unable to attach or mount volumes: unmounted volumes=[lagg-connection], unattached volumes=[kube-api-access-h68w7 lagg-connection]: timed out waiting for the condition
The minified project which you can apply to test yourself:
apiVersion: v1
kind: Namespace
metadata:
name: test-project
labels:
name: test-project
---
apiVersion: v1
kind: Service
metadata:
labels:
environment: test
name: test-lagg
namespace: test-project
spec:
clusterIP: 10.96.13.37
ports:
- name: nfs
port: 2049
- name: mountd
port: 20048
- name: rpcbind
port: 111
selector:
app: nfs-server
environment: test
scope: backend
---
apiVersion: v1
kind: PersistentVolume
metadata:
labels:
environment: test
name: test-lagg-volume
spec:
accessModes:
- ReadWriteMany
capacity:
storage: 2Gi
nfs:
path: /
server: 10.96.13.37
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
labels:
environment: test
name: test-lagg-claim
namespace: test-project
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 2Gi
storageClassName: ""
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: static
environment: test
scope: backend
name: test-digit
namespace: test-project
spec:
selector:
matchLabels:
app: static
environment: test
scope: backend
template:
metadata:
labels:
app: static
environment: test
scope: backend
spec:
containers:
- image: busybox
name: digit
imagePullPolicy: IfNotPresent
command: ['sh', '-c', 'echo Container 1 is Running ; sleep 3600']
volumeMounts:
- mountPath: /cache
name: lagg-connection
volumes:
- name: lagg-connection
persistentVolumeClaim:
claimName: test-lagg-claim
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
environment: test
name: test-lagg
namespace: test-project
spec:
selector:
matchLabels:
app: nfs-server
environment: test
scope: backend
template:
metadata:
labels:
app: nfs-server
environment: test
scope: backend
spec:
containers:
- image: gcr.io/google_containers/volume-nfs:0.8
name: lagg
ports:
- containerPort: 2049
name: lagg
- containerPort: 20048
name: mountd
- containerPort: 111
name: rpcbind
securityContext:
privileged: true
volumeMounts:
- mountPath: /exports
name: lagg-claim
volumes:
- emptyDir: {}
name: lagg-claim
As well as emptyDir, I have also tried hostPath. This setup has worked before, and I'm not sure what, if anything, I've changed since it stopped working.
Updating my Docker for Windows installation from 4.0.1 to 4.1.1 has fixed this problem.
I have a Redis pod on my Kubernetes cluster on Google Cloud. I have created the PV and the claim.
kind: PersistentVolume
apiVersion: v1
metadata:
name: redis-pv
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: my-size
accessModes:
- ReadWriteOnce
hostPath:
path: "/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
labels:
app: postgres
name: redis-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: my size
I also mounted it in my deployment.yaml
volumeMounts:
- mountPath: /data
name: redis-pv-claim
volumes:
- name: redis-pv-claim
persistentVolumeClaim:
claimName: redis-pv-claim
I can't see any error when running kubectl describe pod:
Volumes:
redis-pv-claim:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: redis-pv-claim
ReadOnly: false
But it just can't save any key; after every deployment, the /data folder is empty.
My NFS is active now, but I still can't keep the data.
Output of kubectl describe pvc:
Namespace: my namespace
StorageClass: nfs-client
Status: Bound
Volume: pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-class: nfs-client
volume.beta.kubernetes.io/storage-provisioner: cluster.local/ext1-nfs-client-provisioner
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi
Access Modes: RWX
VolumeMode: Filesystem
Mounted By: my grafana pod
Events: <none>
kubectl describe pod does give me an error though:
Warning FailedMount 18m kubelet, gke-devcluster-pool-1-36e6a393-rg7d MountVolume.SetUp failed for volume "pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3" : mount failed: exit status 1
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/8f7b6630-ed9b-427a-9ada-b75e1805ed60/volumes/kubernetes.io~nfs/pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3 --scope -- /
home/kubernetes/containerized_mounter/mounter mount -t nfs 192.168.1.21:/mnt/nfs/development-test-claim-pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3 /var/lib/kubelet/pods/8f7b6630-ed9b-427a-9ada-b75e1805ed60
/volumes/kubernetes.io~nfs/pvc-5d278b27-a51e-4262-8c1b-68b290b21fc3
Output: Running scope as unit: run-ra5925a8488ef436897bd44d526c57841.scope
Mount failed: mount failed: exit status 32
Mounting command: chroot
Working Redis with PV and PVC on GKE:
apiVersion: v1
kind: Service
metadata:
name: redis
spec:
type: LoadBalancer
ports:
- port: 6379
name: redis
selector:
app: redis
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
name: redis
spec:
selector:
matchLabels:
app: redis
serviceName: redis
replicas: 1
template:
metadata:
labels:
app: redis
spec:
containers:
- name: redis
image: redislabs/rejson
args: ["--requirepass", "password", "--appendonly", "no", "--loadmodule", "/usr/lib/redis/modules/rejson.so"]
ports:
- containerPort: 6379
name: redis
resources:
limits:
cpu: .50
memory: 1500Mi
requests:
cpu: .25
memory: 1000Mi
volumeMounts:
- name: redis-volume
mountPath: /data
volumeClaimTemplates:
- metadata:
name: redis-volume
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 5Gi
You can update the image in this StatefulSet as needed.
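To verify that data actually survives a pod restart, a quick check along these lines should work (redis-0 is the pod name this StatefulSet generates, and password matches the --requirepass argument above):
kubectl exec redis-0 -- redis-cli -a password SET mykey hello
kubectl delete pod redis-0   # the StatefulSet recreates it with the same PVC
kubectl exec redis-0 -- redis-cli -a password GET mykey   # should print "hello" again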
I am trying to run Kafka with Kubeless but I get the error pod has unbound immediate PersistentVolumeClaims. I have created a persistent volume using Rook and Ceph and am trying to use this persistent volume with Kubeless Kafka. However, when I run the code I get "pod has unbound persistent volume claims".
What am I doing wrong here?
Persistent volume claim for Kafka:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: datadir
labels:
kubeless: kafka
spec:
storageClassName: rook-block
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
Persistent volume claim for ZooKeeper:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: zookeeper
labels:
kubeless: zookeeper
spec:
storageClassName: rook-block
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
Kubeless Kafka
apiVersion: v1
kind: Service
metadata:
name: kafka
namespace: kubeless
spec:
ports:
- port: 9092
selector:
kubeless: kafka
---
apiVersion: v1
kind: Service
metadata:
name: zoo
namespace: kubeless
spec:
clusterIP: None
ports:
- name: peer
port: 9092
- name: leader-election
port: 3888
selector:
kubeless: zookeeper
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
labels:
kubeless: kafka-trigger-controller
name: kafka-trigger-controller
namespace: kubeless
spec:
selector:
matchLabels:
kubeless: kafka-trigger-controller
template:
metadata:
labels:
kubeless: kafka-trigger-controller
spec:
containers:
- env:
- name: KUBELESS_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: KUBELESS_CONFIG
value: kubeless-config
image: kubeless/kafka-trigger-controller:v1.0.2
imagePullPolicy: IfNotPresent
name: kafka-trigger-controller
serviceAccountName: controller-acct
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: kafka-controller-deployer
rules:
- apiGroups:
- ""
resources:
- services
- configmaps
verbs:
- get
- list
- apiGroups:
- kubeless.io
resources:
- functions
- kafkatriggers
verbs:
- get
- list
- watch
- update
- delete
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kafka-controller-deployer
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kafka-controller-deployer
subjects:
- kind: ServiceAccount
name: controller-acct
namespace: kubeless
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kafkatriggers.kubeless.io
spec:
group: kubeless.io
names:
kind: KafkaTrigger
plural: kafkatriggers
singular: kafkatrigger
scope: Namespaced
version: v1beta1
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: kafka
namespace: kubeless
spec:
serviceName: broker
template:
metadata:
labels:
kubeless: kafka
spec:
containers:
- env:
- name: KAFKA_ADVERTISED_HOST_NAME
value: broker.kubeless
- name: KAFKA_ADVERTISED_PORT
value: "9092"
- name: KAFKA_PORT
value: "9092"
- name: KAFKA_DELETE_TOPIC_ENABLE
value: "true"
- name: KAFKA_ZOOKEEPER_CONNECT
value: zookeeper.kubeless:2181
- name: ALLOW_PLAINTEXT_LISTENER
value: "yes"
image: bitnami/kafka:1.1.0-r0
imagePullPolicy: IfNotPresent
livenessProbe:
initialDelaySeconds: 30
tcpSocket:
port: 9092
name: broker
ports:
- containerPort: 9092
volumeMounts:
- mountPath: /bitnami/kafka/data
name: datadir
initContainers:
- command:
- sh
- -c
- chmod -R g+rwX /bitnami
image: busybox
imagePullPolicy: IfNotPresent
name: volume-permissions
volumeMounts:
- mountPath: /bitnami/kafka/data
name: datadir
volumeClaimTemplates:
- metadata:
name: datadir
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: broker
namespace: kubeless
spec:
clusterIP: None
ports:
- port: 9092
selector:
kubeless: kafka
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: zoo
namespace: kubeless
spec:
serviceName: zoo
template:
metadata:
labels:
kubeless: zookeeper
spec:
containers:
- env:
- name: ZOO_SERVERS
value: server.1=zoo-0.zoo:2888:3888:participant
- name: ALLOW_ANONYMOUS_LOGIN
value: "yes"
image: bitnami/zookeeper:3.4.10-r12
imagePullPolicy: IfNotPresent
name: zookeeper
ports:
- containerPort: 2181
name: client
- containerPort: 2888
name: peer
- containerPort: 3888
name: leader-election
volumeMounts:
- mountPath: /bitnami/zookeeper
name: zookeeper
initContainers:
- command:
- sh
- -c
- chmod -R g+rwX /bitnami
image: busybox
imagePullPolicy: IfNotPresent
name: volume-permissions
volumeMounts:
- mountPath: /bitnami/zookeeper
name: zookeeper
volumeClaimTemplates:
- metadata:
name: zookeeper
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: zookeeper
namespace: kubeless
spec:
ports:
- name: client
port: 2181
selector:
kubeless: zookeeper
ERROR
vagrant@ubuntu-xenial:~/infra/ansible/scripts/kubeless-kafka-trigger$ kubectl get pod -n kubeless
NAME READY STATUS RESTARTS AGE
kafka-0 0/1 Pending 0 8m44s
kafka-trigger-controller-7cbd54b458-pccpn 1/1 Running 0 8m47s
kubeless-controller-manager-5bcb6757d9-nlksd 3/3 Running 0 3h34m
zoo-0 0/1 Pending 0 8m42s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 45s (x10 over 10m) default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 2 times
kubectl describe pod kafka-0 -n kubeless
Name: kafka-0
Namespace: kubeless
Priority: 0
Node: <none>
Labels: controller-revision-hash=kafka-c498d7f6
kubeless=kafka
statefulset.kubernetes.io/pod-name=kafka-0
Annotations: <none>
Status: Pending
IP:
Controlled By: StatefulSet/kafka
Init Containers:
volume-permissions:
Image: busybox
Port: <none>
Host Port: <none>
Command:
sh
-c
chmod -R g+rwX /bitnami
Environment: <none>
Mounts:
/bitnami/kafka/data from datadir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-wj8vx (ro)
Containers:
broker:
Image: bitnami/kafka:1.1.0-r0
Port: 9092/TCP
Host Port: 0/TCP
Liveness: tcp-socket :9092 delay=30s timeout=1s period=10s #success=1 #failure=3
Environment:
KAFKA_ADVERTISED_HOST_NAME: broker.kubeless
KAFKA_ADVERTISED_PORT: 9092
KAFKA_PORT: 9092
KAFKA_DELETE_TOPIC_ENABLE: true
KAFKA_ZOOKEEPER_CONNECT: zookeeper.kubeless:2181
ALLOW_PLAINTEXT_LISTENER: yes
Mounts:
/bitnami/kafka/data from datadir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-wj8vx (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
datadir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: datadir-kafka-0
ReadOnly: false
default-token-wj8vx:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-wj8vx
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 45s (x10 over 10m) default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
I got it working. For anyone who faces the same problem, this should be useful.
It uses rook-ceph storage with Kubeless Kafka:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: kafka
namespace: kubeless
labels:
kubeless: kafka
spec:
storageClassName: rook-block
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: zookeeper
namespace: kubeless
labels:
kubeless: zookeeper
spec:
storageClassName: rook-block
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: kafka
namespace: kubeless
spec:
ports:
- port: 9092
selector:
kubeless: kafka
---
apiVersion: v1
kind: Service
metadata:
name: zoo
namespace: kubeless
spec:
clusterIP: None
ports:
- name: peer
port: 9092
- name: leader-election
port: 3888
selector:
kubeless: zookeeper
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
labels:
kubeless: kafka-trigger-controller
name: kafka-trigger-controller
namespace: kubeless
spec:
selector:
matchLabels:
kubeless: kafka-trigger-controller
template:
metadata:
labels:
kubeless: kafka-trigger-controller
spec:
containers:
- env:
- name: KUBELESS_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: KUBELESS_CONFIG
value: kubeless-config
image: kubeless/kafka-trigger-controller:v1.0.2
imagePullPolicy: IfNotPresent
name: kafka-trigger-controller
serviceAccountName: controller-acct
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: kafka-controller-deployer
rules:
- apiGroups:
- ""
resources:
- services
- configmaps
verbs:
- get
- list
- apiGroups:
- kubeless.io
resources:
- functions
- kafkatriggers
verbs:
- get
- list
- watch
- update
- delete
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kafka-controller-deployer
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kafka-controller-deployer
subjects:
- kind: ServiceAccount
name: controller-acct
namespace: kubeless
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kafkatriggers.kubeless.io
spec:
group: kubeless.io
names:
kind: KafkaTrigger
plural: kafkatriggers
singular: kafkatrigger
scope: Namespaced
version: v1beta1
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: kafka
namespace: kubeless
spec:
serviceName: broker
template:
metadata:
labels:
kubeless: kafka
spec:
containers:
- env:
- name: KAFKA_ADVERTISED_HOST_NAME
value: broker.kubeless
- name: KAFKA_ADVERTISED_PORT
value: "9092"
- name: KAFKA_PORT
value: "9092"
- name: KAFKA_DELETE_TOPIC_ENABLE
value: "true"
- name: KAFKA_ZOOKEEPER_CONNECT
value: zookeeper.kubeless:2181
- name: ALLOW_PLAINTEXT_LISTENER
value: "yes"
image: bitnami/kafka:1.1.0-r0
imagePullPolicy: IfNotPresent
livenessProbe:
initialDelaySeconds: 30
tcpSocket:
port: 9092
name: broker
ports:
- containerPort: 9092
volumeMounts:
- mountPath: /bitnami/kafka/data
name: kafka
initContainers:
- command:
- sh
- -c
- chmod -R g+rwX /bitnami
image: busybox
imagePullPolicy: IfNotPresent
name: volume-permissions
volumeMounts:
- mountPath: /bitnami/kafka/data
name: kafka
volumes:
- name: kafka
persistentVolumeClaim:
claimName: kafka
---
apiVersion: v1
kind: Service
metadata:
name: broker
namespace: kubeless
spec:
clusterIP: None
ports:
- port: 9092
selector:
kubeless: kafka
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: zoo
namespace: kubeless
spec:
serviceName: zoo
template:
metadata:
labels:
kubeless: zookeeper
spec:
containers:
- env:
- name: ZOO_SERVERS
value: server.1=zoo-0.zoo:2888:3888:participant
- name: ALLOW_ANONYMOUS_LOGIN
value: "yes"
image: bitnami/zookeeper:3.4.10-r12
imagePullPolicy: IfNotPresent
name: zookeeper
ports:
- containerPort: 2181
name: client
- containerPort: 2888
name: peer
- containerPort: 3888
name: leader-election
volumeMounts:
- mountPath: /bitnami/zookeeper
name: zookeeper
initContainers:
- command:
- sh
- -c
- chmod -R g+rwX /bitnami
image: busybox
imagePullPolicy: IfNotPresent
name: volume-permissions
volumeMounts:
- mountPath: /bitnami/zookeeper
name: zookeeper
volumes:
- name: zookeeper
persistentVolumeClaim:
claimName: zookeeper
---
apiVersion: v1
kind: Service
metadata:
name: zookeeper
namespace: kubeless
spec:
ports:
- name: client
port: 2181
selector:
kubeless: zookeeper
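After applying these manifests, the kafka and zookeeper claims should bind against the rook-block storage class and the pods should schedule; the usual checks:
kubectl get pvc -n kubeless
kubectl get pods -n kubeless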
I got the same error in my minikube; I had forgotten to create volumes for my StatefulSets.
I created a PVC. You need to pay attention to the storageClassName and check which classes are available (I did it in the dashboard).
{
"kind": "PersistentVolumeClaim",
"apiVersion": "v1",
"metadata": {
"name": "XXXX",
"namespace": "kube-public",
"labels": {
"kubeless": "XXXX"
}
},
"spec": {
"storageClassName": "hostpath",
"accessModes": [
"ReadWriteOnce"
],
"resources": {
"requests": {
"storage": "1Gi"
}
}
}
}
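If you prefer the CLI over the dashboard, the available storage classes can be listed with:
kubectl get storageclass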
I got persistent volumes.
Then I edited the StatefulSet:
"volumes": [
{
"name": "XXX",
"persistentVolumeClaim": {
"claimName": "XXX"
}
}
]
I added the persistentVolumeClaim attribute, deleted the pod, and waited until the new pod was created.
I have deleted my Elasticsearch cluster, but now, after deploying a new cluster, I need to access the old data that was stored on 3 Persistent Volumes (PVs). These are the claims (PVCs) described below:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
storage-es-data-0 Bound pvc-19429b0b-ba42-11e7-979d-42010a840ff7 12Gi RWO standard 10d
storage-es-data-1 Bound pvc-36505962-ba42-11e7-979d-42010a840ff7 12Gi RWO standard 10d
storage-es-data-2 Bound pvc-422da328-ba42-11e7-979d-42010a840ff7 12Gi RWO standard 10d
This is the description of the old PVs:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-19429b0b-ba42-11e7-979d-42010a840ff7 12Gi RWO Delete Bound default/storage-es-data-0 standard 10d
pvc-36505962-ba42-11e7-979d-42010a840ff7 12Gi RWO Delete Bound default/storage-es-data-1 standard 10d
pvc-422da328-ba42-11e7-979d-42010a840ff7 12Gi RWO Delete Bound default/storage-es-data-2 standard 10d
My new deployment is described as follows:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: es-data
labels:
component: elasticsearch
role: data
spec:
replicas: 1
template:
metadata:
labels:
component: elasticsearch
role: data
spec:
initContainers:
- name: init-sysctl
image: busybox
imagePullPolicy: IfNotPresent
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
containers:
- name: es-data
image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
imagePullPolicy: Always
ports:
- containerPort: 9300
name: transport
protocol: TCP
volumeMounts:
- name: storage
mountPath: /data
volumes:
- name: storage
persistentVolumeClaim:
claimName: storage-es-data-0
After connecting my pod to a LoadBalancer service, I didn't find any documents. Am I missing something? And how can I use the three PVs in the same pod?
Your deployment YAML file is correct. You should be able to find the files from the pvc-19429b0b-ba42-11e7-979d-42010a840ff7 volume inside the /data folder in your pod.
In order to use the three PVs in the same pod, just add them all to your deployment YAML:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: es-data
labels:
component: elasticsearch
role: data
spec:
replicas: 1
template:
metadata:
labels:
component: elasticsearch
role: data
spec:
initContainers:
- name: init-sysctl
image: busybox
imagePullPolicy: IfNotPresent
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
containers:
- name: es-data
image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
imagePullPolicy: Always
ports:
- containerPort: 9300
name: transport
protocol: TCP
volumeMounts:
- name: storage-0
mountPath: /data0
- name: storage-1
mountPath: /data1
- name: storage-2
mountPath: /data2
volumes:
- name: storage-0
persistentVolumeClaim:
claimName: storage-es-data-0
- name: storage-1
persistentVolumeClaim:
claimName: storage-es-data-1
- name: storage-2
persistentVolumeClaim:
claimName: storage-es-data-2
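To confirm that the old indices are actually on the mounted volumes, you can list the three mount points inside the pod (the pod name below is a placeholder for whatever name the Deployment generates):
kubectl get pods -l role=data
kubectl exec <es-data-pod> -- ls /data0 /data1 /data2   # <es-data-pod> is a placeholder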
I'm not sure why the persistent volume is not being claimed, or what steps I could take to diagnose this further.
Should the claim size match the volume size? Should the volume size match the size of the GCP disk?
This is difficult to test and figure out.
My goal here is just to be able to create a WordPress instance, even with a single replica, as long as it supports rolling deployments.
Output of kubectl get pods:
NAME READY STATUS RESTARTS AGE
wordpress-1546832918-mz4rt 0/3 Pending 0 47m
wordpress-1546832918-p0s1s 0/3 Pending 0 47m
Output of kubectl describe pods:
...truncated...
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
47m 3s 168 default-scheduler Warning FailedScheduling [SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "task-pv-claim", which is unexpected., SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "task-pv-claim", which is unexpected.]
Output of kubectl get pvc:
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
task-pv-claim Pending manual 4h
Output of kubectl get pv:
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
pv0001 10Gi RWX Retain Available manual 4h
production.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: wordpress
labels:
app: wordpress
spec:
replicas: 2
selector:
matchLabels:
app: wordpress
template:
metadata:
labels:
app: wordpress
spec:
terminationGracePeriodSeconds: 30
containers:
- image: eu.gcr.io/abcxyz/wordpress:deploy-1502807720
name: wordpress
imagePullPolicy: "Always"
env:
- name: WORDPRESS_HOST
value: localhost
- name: WORDPRESS_DB_USERNAME
valueFrom:
secretKeyRef:
name: cloudsql-db-credentials
key: username
volumeMounts:
- name: wordpress-persistent-storage
mountPath: /var/www/html
- image: eu.gcr.io/abcxyz/nginx:deploy-1502807720
name: nginx
imagePullPolicy: "Always"
ports:
- containerPort: 80
name: nginx
volumeMounts:
- name: wordpress-persistent-storage
mountPath: /var/www/html
readOnly: true
- image: gcr.io/cloudsql-docker/gce-proxy:1.09
name: cloudsql-proxy
command: ["/cloud_sql_proxy", "--dir=/cloudsql",
"-instances=abcxyz:europe-west1:wordpressdb2=tcp:3306",
"-credential_file=/secrets/cloudsql/credentials.json"]
volumeMounts:
- name: cloudsql-instance-credentials
mountPath: /secrets/cloudsql
readOnly: true
- name: ssl-certs
mountPath: /etc/ssl/certs
- name: cloudsql
mountPath: /cloudsql
volumes:
- name: wordpress-persistent-storage
persistentVolumeClaim:
claimName: "task-pv-claim"
- name: cloudsql-instance-credentials
secret:
secretName: cloudsql-instance-credentials
- name: ssl-certs
hostPath:
path: /etc/ssl/certs
- name: cloudsql
emptyDir:
pVolume.yaml
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
name: "pv0001"
spec:
storageClassName: manual
capacity:
storage: "10Gi"
accessModes:
- "ReadWriteMany"
gcePersistentDisk:
fsType: "ext4"
pdName: "wordpress-disk"
pVolumeClaim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: task-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
The spec.accessModes of your persistent volume claim have to match those in the persistent volume. Try changing both of them to the same value.
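For example, since pv0001 above only lists ReadWriteMany, the claim could be aligned to it like this (only the accessModes entry changes from the original pVolumeClaim.yaml):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi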
If that doesn't work, you can add a spec.selector definition to your persistent volume claim, updating it to match your persistent volume's metadata.labels, like this:
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
name: "pv0001"
labels:
name: "pv0001" # can be anything as long as it matches the selector in the pvc
spec:
...
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: task-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
selector:
matchLabels:
name: "pv0001"
The spec.selector serves as a filter to ensure that only PVs with the specified labels are matched.