After running Kompose I get: pod has unbound immediate PersistentVolumeClaims - kubernetes

What's the problem?
I can't get the pods that use a volume running. In the Kubernetes Dashboard I get the following error:
running "VolumeBinding" filter plugin for pod "influxdb-6979bff6f9-hpf89": pod has unbound immediate PersistentVolumeClaims
What did I do?
After running kompose convert on my docker-compose.yml file, I tried to start the pods with microk8s kubectl apply -f . (I am using microk8s). I had to replace the apiVersion in the NetworkPolicy YAML files with networking.k8s.io/v1 (see here), but apart from this change, I didn't change anything.
YAML Files
influxdb-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: ./kompose convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: influxdb
  name: influxdb
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: influxdb
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: ./kompose convert
        kompose.version: 1.21.0 (992df58d8)
      creationTimestamp: null
      labels:
        io.kompose.network/cloud-net: "true"
        io.kompose.network/default: "true"
        io.kompose.service: influxdb
    spec:
      containers:
        - env:
            - name: INFLUXDB_HTTP_LOG_ENABLED
              value: "false"
          image: influxdb:1.8
          imagePullPolicy: ""
          name: influxdb
          ports:
            - containerPort: 8086
          resources: {}
          volumeMounts:
            - mountPath: /var/lib/influxdb
              name: influx
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
        - name: influx
          persistentVolumeClaim:
            claimName: influx
status: {}
influxdb-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: ./kompose convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: influxdb
  name: influxdb
spec:
  ports:
    - name: "8087"
      port: 8087
      targetPort: 8086
  selector:
    io.kompose.service: influxdb
status:
  loadBalancer: {}
influx-persistentvolumeclaim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: influx
  name: influx
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}

A PersistentVolumeClaim stays unbound if the cluster has neither a StorageClass that can dynamically provision a PersistentVolume nor a manually created PersistentVolume that satisfies the claim.
Here is a guide on how to configure a Pod to use a PersistentVolume.
To solve the current scenario, you can manually create a PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
Please note that hostPath is used here only as an example; it is not recommended for production use. Consider using external block or file storage from the supported types here.
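Note that for the influx claim above to bind to this manually created PV, its storageClassName would also have to be set to manual. A minimal sketch of the adjusted claim, keeping the original 100Mi request:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    io.kompose.service: influx
  name: influx
spec:
  storageClassName: manual   # must match the PV's storageClassName so the claim can bind
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi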

Related

Running thehive4 as a cluster on Kubernetes

I'm trying to create a cluster of thehive4 on K8s. In the documentation here, the deployment is done on servers, which is different. There is a non-official Helm chart here, but it is not what I'm looking for.
I created this deployment manifest and successfully created thehive nodes on Kubernetes, but they are still not acting as a cluster since the application.conf file is not set up correctly.
Has anyone tried this before or has a similar experience with it? Please share any documentation or useful support to finish the setup.
Thank you in advance.
apiVersion: v1
kind: ConfigMap
metadata:
  name: thehive-config
data:
  application.conf: |
    # configuration example using servers not k8s
    akka {
      cluster.enable = on
      actor {
        provider = cluster
      }
      remote.artery {
        canonical {
          hostname = "<My IP address>"
          port = 2551
        }
      }
      # seed node list contains at least one active node
      cluster.seed-nodes = [
        "akka://application@<Node 1 IP address>:2551",
        "akka://application@<Node 2 IP address>:2551",
        "akka://application@<Node 3 IP address>:2551"
      ]
    }
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: thehive-claim1
  name: thehive-claim1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1G
status: {}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: thehive-claim2
  name: thehive-claim2
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1G
status: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.22.0 (955b78124)
  creationTimestamp: null
  labels:
    io.kompose.service: thehive
  name: thehive
spec:
  replicas: 3
  selector:
    matchLabels:
      io.kompose.service: thehive
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.22.0 (955b78124)
      creationTimestamp: null
      labels:
        io.kompose.service: thehive
    spec:
      containers:
        - args:
            - --no-config
            - --no-config-secret
          image: thehiveproject/thehive4:latest
          name: thehive4
          ports:
            - containerPort: 9000
          resources: {}
          volumeMounts:
            - mountPath: /etc/thehive/application.conf
              subPath: application.conf
              name: config-volume
            - mountPath: /opt/thp/thehive/data
              name: thehive-claim1
            - mountPath: /opt/thp/thehive/index
              name: thehive-claim2
      restartPolicy: Always
      volumes:
        - name: config-volume
          configMap:
            name: thehive-config
        - name: thehive-claim1
          persistentVolumeClaim:
            claimName: thehive-claim1
        - name: thehive-claim2
          persistentVolumeClaim:
            claimName: thehive-claim2
status: {}

K8s PersistentVolume shared among multiple PersistentVolumeClaims for local testing

Could someone please help me and point out what configuration I should be using for my use case?
I'm building a development k8s cluster, and one of the steps is to generate security files (private keys) in a number of pods during deployment (let's say for a simple setup I have 6 pods that each build their own security keys). I need access to all of these files, and they must persist after a pod goes down.
I'm now trying to figure out how to set this up locally for internal testing. From what I understand, local PersistentVolumes only allow a 1:1 binding with PersistentVolumeClaims, so I would have to create a separate PersistentVolume and PersistentVolumeClaim for each pod that gets configured. I would prefer to avoid this and use one PersistentVolume for all of them.
Could someone be so nice and help me or point me to the right setup that should be used?
-- Update: 26/11/2020
So this is my setup:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hlf-nfs--server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hlf-nfs--server
  template:
    metadata:
      labels:
        app: hlf-nfs--server
    spec:
      containers:
        - name: hlf-nfs--server
          image: itsthenetwork/nfs-server-alpine:12
          ports:
            - containerPort: 2049
              name: tcp
            - containerPort: 111
              name: udp
          securityContext:
            privileged: true
          env:
            - name: SHARED_DIRECTORY
              value: "/opt/k8s-pods/data"
          volumeMounts:
            - name: pvc
              mountPath: /opt/k8s-pods/data
      volumes:
        - name: pvc
          persistentVolumeClaim:
            claimName: shared-nfs-pvc
apiVersion: v1
kind: Service
metadata:
  name: hlf-nfs--server
  labels:
    name: hlf-nfs--server
spec:
  type: ClusterIP
  selector:
    app: hlf-nfs--server
  ports:
    - name: tcp-2049
      port: 2049
      protocol: TCP
    - name: udp-111
      port: 111
      protocol: UDP
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 1Gi
These three are created at once; after that, I read the IP of the service and add it to the last one:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /opt/k8s-pods/data
    server: <<-- IP from `kubectl get svc -l name=hlf-nfs--server`
The problem I'm getting and trying to resolve is that the PVC does not get bound to the PV, and the deployment stays stuck waiting to become READY.
Did I miss anything?
You can create an NFS server and have the pods use an NFS volume. Here is the manifest to create such an in-cluster NFS server (make sure you modify STORAGE_CLASS and the other variables below):
export NFS_NAME="nfs-share"
export NFS_SIZE="10Gi"
export NFS_IMAGE="itsthenetwork/nfs-server-alpine:12"
export STORAGE_CLASS="thin-disk"
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: ${NFS_NAME}
  labels:
    app.kubernetes.io/name: nfs-server
    app.kubernetes.io/instance: ${NFS_NAME}
spec:
  ports:
    - name: tcp-2049
      port: 2049
      protocol: TCP
    - name: udp-111
      port: 111
      protocol: UDP
  selector:
    app.kubernetes.io/name: nfs-server
    app.kubernetes.io/instance: ${NFS_NAME}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app.kubernetes.io/name: nfs-server
    app.kubernetes.io/instance: ${NFS_NAME}
  name: ${NFS_NAME}
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: $STORAGE_CLASS
  volumeMode: Filesystem
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${NFS_NAME}
  labels:
    app.kubernetes.io/name: nfs-server
    app.kubernetes.io/instance: ${NFS_NAME}
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: nfs-server
      app.kubernetes.io/instance: ${NFS_NAME}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nfs-server
        app.kubernetes.io/instance: ${NFS_NAME}
    spec:
      containers:
        - name: nfs-server
          image: ${NFS_IMAGE}
          ports:
            - containerPort: 2049
              name: tcp
            - containerPort: 111
              name: udp
          securityContext:
            privileged: true
          env:
            - name: SHARED_DIRECTORY
              value: /nfsshare
          volumeMounts:
            - name: pvc
              mountPath: /nfsshare
      volumes:
        - name: pvc
          persistentVolumeClaim:
            claimName: ${NFS_NAME}
EOF
Below is an example of how to point the other pods to this NFS server. In particular, refer to the volumes section at the end of the YAML:
export NFS_NAME="nfs-share"
export NFS_IP=$(kubectl get --template={{.spec.clusterIP}} service/$NFS_NAME)
kubectl apply -f - <<EOF
kind: Deployment
apiVersion: apps/v1
metadata:
  name: apache
  labels:
    app: apache
spec:
  replicas: 2
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
        - name: apache
          image: apache
          ports:
            - containerPort: 80
          volumeMounts:
            # mount the NFS export inside the container
            - mountPath: /var/www/html/
              name: nfs-vol
              subPath: html
      volumes:
        - name: nfs-vol
          nfs:
            server: $NFS_IP
            path: /
EOF
It is correct that there is a 1:1 relation between a PersistentVolumeClaim and a PersistentVolume.
However, Pods running on the same Node can concurrently mount the same volume, i.e. use the same PersistentVolumeClaim.
If you use Minikube for local development, you only have one node, so you can use the same PersistentVolumeClaim. Since you want to use different files for each app, you could use a unique directory for each app in that shared volume.
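A minimal sketch of that idea, assuming a single hostPath-backed PV/PVC pair (the names shared-pv, shared-pvc and app-a below are hypothetical): every pod references the same claim and writes into its own subPath directory.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-pv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/shared
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app-a
spec:
  containers:
    - name: app-a
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared
          mountPath: /keys
          subPath: app-a        # each app gets its own directory inside the shared volume
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: shared-pvc   # every pod references the same claim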
So finally, I did it by using a dynamic provisioner.
I installed stable/nfs-server-provisioner with Helm. With proper configuration, it managed to create a PV named nfs to which my PVCs are able to bind :)
helm install stable/nfs-server-provisioner --name nfs-provisioner -f nfs_provisioner.yaml
The nfs_provisioner.yaml is as follows:
persistence:
  enabled: true
  storageClass: "standard"
  size: 20Gi

storageClass:
  # Name of the storage class that will be managed by the provisioner
  name: nfs
  defaultClass: true
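With that in place, a claim only needs to reference the nfs storage class and the provisioner creates a matching PV on demand. A minimal sketch, with an arbitrary claim name and size:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-pvc
spec:
  storageClassName: nfs        # handled by the nfs-server-provisioner
  accessModes:
    - ReadWriteMany            # NFS-backed volumes can be mounted by many pods
  resources:
    requests:
      storage: 1Gi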

K8s deployment error: `selector` does not match template `labels`

I'm trying to migrate from docker-compose to Kubernetes. I had some issues with volumes, so what I did is:
kompose convert --volumes hostPath
Then I had another issue
no matches for kind "Deployment" in version "extensions/v1beta1"
So I changed apiVersion from extensions/v1beta1 to apps/v1 and added "selector". Now I can't get past this issue:
Error from server (Invalid): error when creating "database-deployment.yaml": Deployment.apps "database" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"io.kompose.service":"database"}: `selector` does not match template `labels`
Error from server (Invalid): error when creating "phpmyadmin-deployment.yaml": Deployment.apps "phpmyadmin" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"io.kompose.service":"phpmyadmin"}: `selector` does not match template `labels`
Error from server (Invalid): error when creating "webserver-deployment.yaml": Deployment.apps "webserver" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"io.kompose.service":"webserver"}: `selector` does not match template `labels`
database-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert --volumes hostPath
    kompose.version: 1.19.0 (f63a961c)
  creationTimestamp: null
  labels:
    io.kompose.service: database
    app: database
  name: database
spec:
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        io.kompose.service: database
  replicas: 1
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert --volumes hostPath
        kompose.version: 1.19.0 (f63a961c)
      creationTimestamp: null
      labels:
        io.kompose.service: database
    spec:
      containers:
        - env:
            - name: MYSQL_DATABASE
              value: Bazadanerro
            - name: MYSQL_PASSWORD
              value: P#$$w0rd
            - name: MYSQL_ROOT_PASSWORD
              value: P#$$w0rd
            - name: MYSQL_USER
              value: dockerro
          image: mariadb
          name: mysql
          resources: {}
      restartPolicy: Always
status: {}
phpmyadmin-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert --volumes hostPath
    kompose.version: 1.19.0 (f63a961c)
  creationTimestamp: null
  labels:
    io.kompose.service: phpmyadmin
    app: phpmyadmin
  name: phpmyadmin
spec:
  selector:
    matchLabels:
      app: phpmyadmin
  template:
    metadata:
      labels:
        io.kompose.service: database
  replicas: 1
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert --volumes hostPath
        kompose.version: 1.19.0 (f63a961c)
      creationTimestamp: null
      labels:
        io.kompose.service: phpmyadmin
    spec:
      containers:
        - env:
            - name: MYSQL_PASSWORD
              value: P#$$w0rd
            - name: MYSQL_ROOT_PASSWORD
              value: P#$$w0rd
            - name: MYSQL_USER
              value: dockerro
            - name: PMA_HOST
              value: database
            - name: PMA_PORT
              value: "3306"
          image: phpmyadmin/phpmyadmin
          name: phpmyadmins
          ports:
            - containerPort: 80
          resources: {}
      restartPolicy: Always
status: {}
And webserver-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert --volumes hostPath
    kompose.version: 1.19.0 (f63a961c)
  creationTimestamp: null
  labels:
    io.kompose.service: webserver
    app: webserverro
  name: webserver
spec:
  selector:
    matchLabels:
      app: webserverro
  template:
    metadata:
      labels:
        io.kompose.service: webserver
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert --volumes hostPath
        kompose.version: 1.19.0 (f63a961c)
      creationTimestamp: null
      labels:
        io.kompose.service: webserver
    spec:
      containers:
        - image: webserver
          name: webserverro
          ports:
            - containerPort: 80
          resources: {}
          volumeMounts:
            - mountPath: /var/www/html
              name: webserver-hostpath0
      restartPolicy: Always
      volumes:
        - hostPath:
            path: /root/webserverro/root/webserverro
          name: webserver-hostpath0
status: {}
What am I doing wrong?
The error is self-explanatory: "selector" does not match template "labels".
Edit your YAML files and set the same key-value pairs in both spec.selector.matchLabels and spec.template.metadata.labels.
spec:
  selector:
    matchLabels: # <---- This
      app: database
  template:
    metadata:
      labels: # <---- This
        io.kompose.service: database
Why is the selector field important?
The selector field defines how the Deployment finds which Pods to manage. In this case, you simply select a label that is defined in the Pod template (app: nginx). However, more sophisticated selection rules are possible, as long as the Pod template itself satisfies the rule.
Reference
Update:
One possible sample can be:
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: database
  template:
    metadata:
      labels:
        app.kubernetes.io/name: database
Visit recommended labels
Update2:
no matches for kind "Deployment" in version "extensions/v1beta1"
The apiVersion for Deployment object is now apps/v1.
apiVersion: apps/v1 # <-- update here.
kind: Deployment
... ... ...
In all of these files you have two copies of the pod spec template: block. These don't get merged; the second one just replaces the first.
spec:
  selector: { ... }
  template: # This will get ignored
    metadata:
      labels:
        io.kompose.service: webserver
        app: webserverro
  template: # and completely replaced with this
    metadata:
      annotations:
        kompose.cmd: kompose convert --volumes hostPath
        kompose.version: 1.19.0 (f63a961c)
      labels: # without the app: label
        io.kompose.service: webserver
    spec: { ... }
Remove the first template: block and move the full set of labels into the one template: block that remains.
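Applied to the webserver deployment above, the merged result would look roughly like this (a sketch showing only the selector/template skeleton):
spec:
  selector:
    matchLabels:
      app: webserverro
  replicas: 1
  strategy:
    type: Recreate
  template:                    # single template block
    metadata:
      labels:
        io.kompose.service: webserver
        app: webserverro       # must include the label the selector matches on
    spec:
      containers:
        - image: webserver
          name: webserverro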

stateful jupyter notebook in kubernetes

I'm trying to deploy a stateful Jupyter notebook in Kubernetes, but I'm not able to save the code written in a notebook; whenever the notebook pod goes down, all the code is deleted. I tried to use a persistent volume but was unable to achieve the expected result.
UPDATE
I changed the mount path to "/home/jovyan", as Jupyter saves the .ipynb files in this location. But now I'm getting PermissionError: [Errno 13] Permission denied: '/home/jovyan/.local' while deploying the pod.
kind: Ingress
metadata:
  name: jupyter-ingress
spec:
  backend:
    serviceName: jupyter-notebook-service
    servicePort: 8888
---
kind: Service
apiVersion: v1
metadata:
  name: jupyter-notebook-service
spec:
  clusterIP: None
  selector:
    app: jupyter-notebook
  ports:
    - protocol: TCP
      port: 8888
      targetPort: 8888
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jupyter-notebook
  labels:
    app: jupyter-notebook
spec:
  replicas: 1
  serviceName: "jupyter-notebook-service"
  selector:
    matchLabels:
      app: jupyter-notebook
  template:
    metadata:
      labels:
        app: jupyter-notebook
    spec:
      serviceAccountName: dsx-spark
      volumes:
        - name: jupyter-pv-storage
          persistentVolumeClaim:
            claimName: jupyter-pv-claim
      containers:
        - name: minimal-notebook
          image: jupyter/pyspark-notebook:latest
          ports:
            - containerPort: 8888
          command: ["start-notebook.sh"]
          args: ["--NotebookApp.token=''"]
          volumeMounts:
            - mountPath: "/home/jovyan"
              name: jupyter-pv-storage
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: jupyter-pv-claim
spec:
  storageClassName: jupyter-pv-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jupyter-pv-volume
  labels:
    type: local
spec:
  storageClassName: jupyter-pv-storage
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/jovyan"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: jupyternotebook-pv-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
The Jupyter notebook Pod runs as a non-root user, so it is unable to write to the mounted volume. We are therefore using initContainers to change the user/permissions on the PersistentVolumeClaim's volume before creating the Pod.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jupyter-notebook
  labels:
    app: jupyter-notebook
spec:
  replicas: 1
  serviceName: "jupyter-notebook-service"
  selector:
    matchLabels:
      app: jupyter-notebook
  template:
    metadata:
      labels:
        app: jupyter-notebook
    spec:
      serviceAccountName: dsx-spark
      volumes:
        - name: ci-jupyter-storage-def
          persistentVolumeClaim:
            claimName: my-jupyter-pv-claim
      containers:
        - name: minimal-notebook
          image: jupyter/pyspark-notebook:latest
          ports:
            - containerPort: 8888
          command: ["start-notebook.sh"]
          args: ["--NotebookApp.token=''"]
          volumeMounts:
            - mountPath: "/home/jovyan/work"
              name: ci-jupyter-storage-def
      initContainers:
        - name: jupyter-data-permission-fix
          image: busybox
          command: ["/bin/chmod", "-R", "777", "/data"]
          volumeMounts:
            - name: ci-jupyter-storage-def
              mountPath: /data
As I have already mentioned in the comments you need to make sure that:
The storage for a given Pod must either be provisioned by a PersistentVolume Provisioner based on the requested storage class, or pre-provisioned by an admin. The volumeClaimTemplates will provide stable storage using PersistentVolumes. PersistentVolumes associated with the Pods’ PersistentVolume Claims are not deleted when the Pods, or StatefulSet are deleted.
The container is running as a user that has permission to access that volume. This can be done by changing the permissions to 777 or, as you already noticed, by using a proper initContainer.
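For the StatefulSet in the question, the stable per-replica storage mentioned in the quote above could also be requested through volumeClaimTemplates instead of a hand-made PVC. A minimal sketch, assuming a storageClassName of standard that you would replace with whatever class your cluster actually provides:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jupyter-notebook
spec:
  replicas: 1
  serviceName: jupyter-notebook-service
  selector:
    matchLabels:
      app: jupyter-notebook
  template:
    metadata:
      labels:
        app: jupyter-notebook
    spec:
      containers:
        - name: minimal-notebook
          image: jupyter/pyspark-notebook:latest
          volumeMounts:
            - name: notebook-data
              mountPath: /home/jovyan/work
  volumeClaimTemplates:             # one PVC is created per replica and survives pod restarts
    - metadata:
        name: notebook-data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: standard  # assumption: replace with a class your cluster provides
        resources:
          requests:
            storage: 1Gi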

Error: `selector` does not match template `labels` after adding selector

I am trying to deploy this Kubernetes Deployment; however, whenever I run kubectl apply -f es-deployment.yaml it throws the error: Error: `selector` does not match template `labels`
I have already tried to add the selector and matchLabels under the spec section, but it seems that did not work. Below is my YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -f docker-compose.yaml --volumes hostPath
    kompose.version: 1.19.0 (f63a961c)
  creationTimestamp: null
  labels:
    io.kompose.service: elasticsearchconnector
  name: elasticsearchconnector
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose.yaml --volumes hostPath
        kompose.version: 1.19.0 (f63a961c)
      creationTimestamp: null
      labels:
        io.kompose.service: elasticsearchconnector
    spec:
      selector:
        matchLabels:
          app: elasticsearchconnector
      containers:
        - env:
            - [env stuff]
          image: confluentinc/cp-kafka-connect:latest
          name: elasticsearchconnector
          ports:
            - containerPort: 28082
          resources: {}
          volumeMounts:
            - mountPath: /etc/kafka-connect
              name: elasticsearchconnector-hostpath0
            - mountPath: /etc/kafka-elasticsearch
              name: elasticsearchconnector-hostpath1
            - mountPath: /etc/kafka
              name: elasticsearchconnector-hostpath2
      restartPolicy: Always
      volumes:
        - hostPath:
            path: /home/centos/Sprint0Demo/DockerScriptsAWS/ServersCompose/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-connect
          name: elasticsearchconnector-hostpath0
        - hostPath:
            path: /home/centos/Sprint0Demo/DockerScriptsAWS/ServersCompose/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-elasticsearch
          name: elasticsearchconnector-hostpath1
        - hostPath:
            path: /home/centos/Sprint0Demo/DockerScriptsAWS/ServersCompose/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafak
          name: elasticsearchconnector-hostpath2
status: {}
Your labels and selectors are misplaced.
First, you need to specify which pods the deployment will control:
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearchconnector
Then you need to label the pod properly:
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose.yaml --volumes hostPath
        kompose.version: 1.19.0 (f63a961c)
      creationTimestamp: null
      labels:
        io.kompose.service: elasticsearchconnector
        app: elasticsearchconnector
    spec:
      containers:
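Putting the two fragments together, the corrected Deployment would look roughly like this (a sketch showing only the structure relevant to the error):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearchconnector
  labels:
    io.kompose.service: elasticsearchconnector
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearchconnector      # must match a label on the pod template
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        io.kompose.service: elasticsearchconnector
        app: elasticsearchconnector    # the label the selector matches on
    spec:
      containers:
        - image: confluentinc/cp-kafka-connect:latest
          name: elasticsearchconnector
          ports:
            - containerPort: 28082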