Dynamic Persistent Volumes with Kubernetes Heketi not working - kubernetes

I deployed heketi/gluster on a Kubernetes 1.6 cluster. Then I followed the guide to create a StorageClass for dynamic persistent volumes, but no PV is created when I create a PVC.
The heketi and glusterfs pods are running and work as expected when I use heketi-cli manually and create PVs by hand. Those PVs are also claimed by the PVCs.
I feel like I'm missing a step, but I don't know which one. I followed the guides and assumed that dynamic persistent volumes should "just work":
install heketi-cli and glusterfs-client
use ./gk-deploy -g
create StorageClass
create PVC
Did I miss a step?
StorageClass
$ kubectl get storageclasses
NAME      TYPE
slow      kubernetes.io/glusterfs
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: 2017-06-07T06:54:35Z
  name: slow
  resourceVersion: "82741"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/slow
  uid: 2aab0a5c-4b4e-11e7-9ee4-001a4a3f1eb3
parameters:
  restauthenabled: "false"
  resturl: http://10.2.35.3:8080/
  restuser: ""
  restuserkey: ""
provisioner: kubernetes.io/glusterfs
PVC
$ kubectl -n kube-system get pvc
NAME                          STATUS    VOLUME               CAPACITY   ACCESSMODES   STORAGECLASS   AGE
gluster1                      Bound     glusterfs-b427d1f1   1Gi        RWO                          15m
influxdb-persistent-storage   Pending                                                 slow           14h
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.beta.kubernetes.io/storage-class":"slow"},"labels":{"k8s-app":"influxGrafana"},"name":"influxdb-persistent-storage","namespace":"kube-system"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}}}}
    volume.beta.kubernetes.io/storage-class: slow
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/glusterfs
  creationTimestamp: 2017-06-06T16:48:46Z
  labels:
    k8s-app: influxGrafana
  name: influxdb-persistent-storage
  namespace: kube-system
  resourceVersion: "87638"
  selfLink: /api/v1/namespaces/kube-system/persistentvolumeclaims/influxdb-persistent-storage
  uid: 021b69c4-4ad8-11e7-9ee4-001a4a3f1eb3
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
status:
  phase: Pending
Sources:
https://github.com/gluster/gluster-kubernetes
http://blog.lwolf.org/post/how-i-deployed-glusterfs-cluster-to-kubernetes/
Environment:
$ kubectl cluster-info
Kubernetes master is running at https://andrea-master-0.muellerpublic.de:443
KubeDNS is running at https://andrea-master-0.muellerpublic.de:443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://andrea-master-0.muellerpublic.de:443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
$ heketi-cli cluster list
Clusters:
24dca142f655fb698e523970b33238a9
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4+coreos.0", GitCommit:"8996efde382d88f0baef1f015ae801488fcad8c4", GitTreeState:"clean", BuildDate:"2017-05-19T21:11:20Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

The problem was the trailing slash in the StorageClass resturl:
resturl: http://10.2.35.3:8080/ must be resturl: http://10.2.35.3:8080
PS: o.O ....
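For reference, the corrected StorageClass would look like this (all values taken from the question; only the resturl changes):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/glusterfs
parameters:
  restauthenabled: "false"
  resturl: http://10.2.35.3:8080   # no trailing slash
  restuser: ""
  restuserkey: ""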

Related

RabbitMQ cluster-operator does not work in Kubernetes

I have a Kubernetes 1.17.17 cluster of 3 nodes, deployed with Rancher.
Following this instruction I installed the RabbitMQ cluster-operator:
https://www.rabbitmq.com/kubernetes/operator/quickstart-operator.html
kubectl apply -f "https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml"
That works fine, but...
I then created this very simple configuration for the instance, according to the documentation:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq
  namespace: test-rabbitmq
I get this error:
error while running "VolumeBinding" filter plugin for pod "rabbitmq-server-0": pod has unbound immediate PersistentVolumeClaims
After that I checked:
kubectl get storageclasses
and saw that there were no resources! I added the following StorageClass:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Then I created a PV and PVC:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: rabbitmq-data-sigma
  labels:
    type: local
  namespace: test-rabbitmq
  annotations:
    volume.alpha.kubernetes.io/storage-class: rabbitmq-data-sigma
spec:
  storageClassName: local-storage
  capacity:
    storage: 3Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: "/opt/rabbitmq-data-sigma"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rabbitmq-data
  namespace: test-rabbitmq
spec:
  storageClassName: local-storage
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
I still end up getting an error on the volume, which is generated automatically:
FailedBinding no persistent volumes available for this claim and no storage class is set
Please help me understand this problem!
You can configure Dynamic Volume Provisioning, e.g. dynamic NFS provisioning as described in this article, or you can manually create a PersistentVolume (which is NOT the recommended approach).
I really recommend configuring dynamic provisioning:
this will allow you to generate PersistentVolumes automatically.
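As a rough sketch of what that looks like once a dynamic provisioner is installed: the provisioner registers a name, a StorageClass points at it, and PVCs referencing that class are then provisioned automatically. The provisioner value example.com/nfs below is a hypothetical placeholder, not the name of any real provisioner; use whatever name your chosen provisioner registers.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-dynamic              # hypothetical name
provisioner: example.com/nfs     # placeholder: the name registered by your provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate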
Manually creating a PersistentVolume
As I mentioned, this isn't the recommended approach, but it may be useful when you want to check something quickly without configuring additional components.
First you need to create a PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /mnt/rabbitmq # data will be stored in the "/mnt/rabbitmq" directory on the worker node
    type: Directory
Then create the /mnt/rabbitmq directory on the node where the rabbitmq-server-0 Pod will be running. In your case you have 3 worker nodes, so it may be difficult to determine where the Pod will land.
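One way around that uncertainty (a sketch of my own, not part of the original manifest; the hostname worker-1 is a placeholder) is to pin the PersistentVolume to a specific node with nodeAffinity, so you know exactly where to create the directory:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /mnt/rabbitmq
    type: Directory
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-1             # placeholder: the node where /mnt/rabbitmq was created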
As a result you can see that the PersistentVolumeClaim was bound to the newly created PersistentVolume and the rabbitmq-server-0 Pod was created successfully:
# kubectl get pv,pvc -A
NAME                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                          STORAGECLASS   REASON   AGE
persistentvolume/pv   10Gi       RWO            Recycle          Bound    test-rabbitmq/persistence-rabbitmq-server-0                            11m

NAMESPACE       NAME                                                  STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-rabbitmq   persistentvolumeclaim/persistence-rabbitmq-server-0   Bound    pv       10Gi       RWO                           11m

# kubectl get pod -n test-rabbitmq
NAME                READY   STATUS    RESTARTS   AGE
rabbitmq-server-0   1/1     Running   0          11m

Cannot mount glusterfs volume on GKE

I have deployed a GlusterFS cluster on GCE: three nodes, one volume, and one brick. I need to mount it in a pod deployed on GKE. I have successfully created the endpoint and PV, but I cannot create the PVC. If I set volumeName to refer to my PV, I get the following error:
2s Warning VolumeMismatch persistentvolumeclaim/my-glusterfs-pvc Cannot bind to requested volume "my-glusterfs-pv": storageClassName does not match
If I don't set a volumeName I get the following error:
3s Warning ProvisioningFailed persistentvolumeclaim/my-glusterfs-pvc Failed to provision volume with StorageClass "standard": invalid AccessModes [ReadWriteMany]: only AccessModes [ReadWriteOnce ReadOnlyMany] are supported
I thought GlusterFS had native support on GKE, but I cannot find it on the GCP site:
https://cloud.google.com/kubernetes-engine/docs/concepts/node-images#storage_driver_support
Is it related to the errors I'm getting while deploying the PVC? How could I solve it?
Thanks!
I have not defined any StorageClass. I can only find the Heketi-provisioned GlusterFS StorageClass in the Kubernetes documentation. My YAMLs:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-glusterfs-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    path: mypath
    endpoints: my-glusterfs-cluster
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-glusterfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  volumeName: my-glusterfs-pv
I have deployed Heketi on my GlusterFS cluster and it works correctly. However, I cannot deploy the PVC on GKE. glusterfs-client is running fine, the Heketi secret is available, and the gluster StorageClass was created without errors, but the PVC remains Pending and I found this event:
17m Warning ProvisioningFailed persistentvolumeclaim/liferay-glusterfs-pvc Failed to provision volume with StorageClass "gluster-heketi-external": failed to create volume: failed to create volume: see kube-controller-manager.log for details
Storageclass:
kind: StorageClass
#apiVersion: storage.k8s.io/v1beta1
apiVersion: storage.k8s.io/v1
metadata:
  name: gluster-heketi-external
provisioner: kubernetes.io/glusterfs
parameters:
  restauthenabled: "true"
  resturl: "http://my-heketi-node-ip:8080"
  clusterid: "idididididididididid"
  restuser: "myadminuser"
  secretName: "heketi-secret"
  secretNamespace: "my-app"
  volumetype: "replicate:3"
PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
volume.beta.kubernetes.io/storage-class: gluster-heketi-external
# finalizers:
# - kubernetes.io/pvc-protection
name: my-app-pvc
namespace: my-app
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 100Gi
I have also created the firewall rule needed for Heketi, so the GKE nodes can reach the Heketi resturl and port. I have checked that it's working by running telnet to port 8080 while sniffing the traffic on the Heketi node. Everything appears to be OK; I can see the replies and the packets. However, I cannot see any packets when I deploy the PVC, and there is nothing in syslog. I can create volumes when the heketi command is run locally.
I have also tried to curl the Heketi node from a gluster-client pod and it works correctly:
kubectl exec -it gluster-client-6fqmb -- curl http://[myheketinodeip]:8080/hello
Hello from Heketi
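Since the event says to see kube-controller-manager.log, one next step would be to check the controller manager's logs for the glusterfs provisioner. Note this is only possible on a self-managed control plane (e.g. kubeadm, where the pod carries the component=kube-controller-manager label); on GKE the control plane is hosted, so these logs are not directly accessible this way:
kubectl -n kube-system logs -l component=kube-controller-manager --tail=200 | grep -i gluster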

WaitForFirstConsumer PersistentVolumeClaim waiting for first consumer to be created before binding

I set up a new single-node k8s cluster, which is tainted. But the PersistentVolume cannot be created successfully when I try to create a simple PostgreSQL.
Some detailed information is below.
The StorageClass is copied from the official page: https://kubernetes.io/docs/concepts/storage/storage-classes/#local
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
The StatefulSet is:
kind: StatefulSet
apiVersion: apps/v1beta1
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  ...
  volumeClaimTemplates:
  - metadata:
      name: postgres-data
    spec:
      storageClassName: local-storage
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
About the running StorageClass:
$ kubectl describe storageclasses.storage.k8s.io
Name: local-storage
IsDefaultClass: No
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"local-storage"},"provisioner":"kubernetes.io/no-provisioner","volumeBindingMode":"WaitForFirstConsumer"}
Provisioner: kubernetes.io/no-provisioner
Parameters: <none>
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: WaitForFirstConsumer
Events: <none>
About the running PersistentVolumeClaim:
$ kubectl describe pvc
Name: postgres-data-postgres-0
Namespace: default
StorageClass: local-storage
Status: Pending
Volume:
Labels: app=postgres
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Events:
  Type    Reason                Age                            From                         Message
  ----    ------                ----                           ----                         -------
  Normal  WaitForFirstConsumer  <invalid> (x2 over <invalid>)  persistentvolume-controller  waiting for first consumer to be created before binding
K8s versions:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:37:52Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
The app is waiting for the Pod, while the Pod is waiting for a PersistentVolume via a PersistentVolumeClaim.
However, the PersistentVolume has to be prepared by the user before it can be used.
My previous YAMLs were missing a PersistentVolume like this:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-data
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi
  local:
    path: /data/postgres
  persistentVolumeReclaimPolicy: Retain
  accessModes:
  - ReadWriteOnce
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: app
          operator: In
          values:
          - postgres
The local path /data/postgres has to be prepared before use.
Kubernetes will not create it automatically.
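For example, something like this (a sketch; the hostname is a placeholder for whichever node carries the app=postgres label that the PV's nodeAffinity selects):
ssh <node-with-app-postgres-label> 'sudo mkdir -p /data/postgres'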
I just ran into this myself and was completely thrown for a loop until I realized that the StorageClass's volumeBindingMode was set to WaitForFirstConsumer instead of my intended value of Immediate. This value is immutable, so you will have to:
Get the StorageClass YAML:
kubectl get storageclasses.storage.k8s.io gp2 -o yaml > gp2.yaml
or just copy the example from the docs here (make sure the metadata names match). Here is what I have configured:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
- debug
volumeBindingMode: Immediate
Then delete the old StorageClass and recreate it with the new volumeBindingMode set to Immediate.
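A minimal sketch of that delete/recreate step, assuming the edited manifest was saved as gp2.yaml as above:
kubectl delete storageclass gp2
kubectl apply -f gp2.yaml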
Note: The EKS cluster may need permissions to create cloud resources like EBS or EFS. Assuming EBS, you should be good with arn:aws:iam::aws:policy/AmazonEKSClusterPolicy.
After doing this you should have no problem creating and using dynamically provisioned PVs.
In my case, I had a claimRef without a namespace specified.
The correct syntax is:
claimRef:
  namespace: default
  name: my-claim
A StatefulSet also prevented initialization; I had to replace it with a Deployment.
This was a f5g headache.
For me the problem was mismatched accessModes fields in the PV and PVC. PVC was requesting RWX/ReadWriteMany while PV was offering RWO/ReadWriteOnce.
The accepted answer didn't work for me. I think it's because the app key won't be set before the StatefulSet's Pods are deployed, preventing the PersistentVolumeClaim from matching the nodeSelector (and preventing the Pods from starting with the error didn't find available persistent volumes to bind). To fix this deadlock, I defined one PersistentVolume for each node (this may not be ideal, but it worked):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-data-node1
  labels:
    type: local
spec:
  […]
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1
I'm stuck in this vicious loop myself.
I'm trying to create a kubegres cluster (which relies on dynamic provisioning, as I understand it).
I'm using RKE on a local-servers-like setup, and I have the same scheduling issue as the one initially mentioned.
Note that the access mode of the PVCs (created by kubegres) is empty, as per the output below.
[rke@rke-1 manifests]$ kubectl get pv,pvc
NAME                         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
persistentvolume/local-vol   20Gi       RWO            Delete           Available           local-storage            40s

NAME                                               STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
persistentvolumeclaim/local-vol-mypostgres-1-0     Pending                                      local-storage   6m42s
persistentvolumeclaim/postgres-db-mypostgres-1-0   Pending                                      local-storage   6m42s
As an update: the issue in my case was that the PVC could not find a proper PV, which was supposed to be dynamically provisioned. For local storage classes this feature is not yet supported, so I had to use a third-party solution, which solved my issue:
https://github.com/rancher/local-path-provisioner
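If I remember the project README correctly, installing it is a single manifest that also creates a StorageClass named local-path (verify the current URL and version in the repo before using):
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
kubectl get storageclass   # should now list local-path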
This issue mainly happens with WaitForFirstConsumer when you define nodeName in the Deployment/Pod spec. Make sure you don't set nodeName and hard-bind the pod through it. The issue should be resolved once you remove nodeName.
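For illustration (a sketch of my own, not from the original answer): if you do need to steer a pod to a particular node, prefer nodeSelector or affinity over nodeName, because those still go through the scheduler and therefore through the volume-binding step; the hostname below is a placeholder:
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  # nodeName: worker-1                    # avoid: bypasses the scheduler and volume binding
  nodeSelector:
    kubernetes.io/hostname: worker-1      # placeholder hostname; pod is still scheduled normally
  containers:
  - name: app
    image: nginx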
I believe this can be a valid message, meaning that no container has started yet whose volumes are bound to the persistent volume claim.
I experienced this issue on Rancher Desktop. It turned out the problem was caused by Rancher not running properly after a macOS upgrade: the containers were not starting and stayed in a Pending state.
After resetting Rancher Desktop (using the UI), the containers were able to start and the message disappeared.
waitforfirstconsumer-persistentvolumeclaim means the Pod which requires this PVC has not been scheduled. describe pod may give some more clues. In my case the node was not able to schedule the Pod since the node's pod limit was 110 and the deployment exceeded it. Hope this helps to identify the issue faster. Increasing the pod limit and restarting the kubelet on the node solved it.
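To check whether you are hitting that limit, something like this should work (the node name is a placeholder):
kubectl describe node <node-name> | grep -A 6 Allocatable   # look at the "pods" line
kubectl get pods --all-namespaces --no-headers --field-selector spec.nodeName=<node-name> | wc -l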

NFS Persistent Volume Claim remains pending indefinitely

I get the following error when creating a PVC and I have no idea what it means.
Events:
  Type    Reason             Age              From           Message
  ----    ------             ----             ----           -------
  Normal  ExternalExpanding  1m (x3 over 5m)  volume_expand  Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.
My PV for it is there and seems to be fine.
Here is the spec for my PV and PVC.
apiVersion: v1
kind: PersistentVolume
metadata:
  creationTimestamp: null
  finalizers:
  - kubernetes.io/pv-protection
  labels:
    app: projects-service
    app-guid: design-center-projects-service
    asset: service
    chart: design-center-projects-service
    class: projects-service-nfs
    company: mulesoft
    component: projects
    component-asset: projects-service
    heritage: Tiller
    product: design-center
    product-component: design-center-projects
    release: design-center-projects-service
  name: projects-service-nfs
  selfLink: /api/v1/persistentvolumes/projects-service-nfs
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 30Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: projects-service-nfs-block
    namespace: design-center
    resourceVersion: "7932052"
    uid: d114dd38-f411-11e8-b7b1-1230f683f84a
  mountOptions:
  - nfsvers=3
  - hard
  - sync
  nfs:
    path: /
    server: 1.1.1.1
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Block
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: projects-service
    app-guid: design-center-projects-service
    asset: service
    chart: design-center-projects-service
    company: mulesoft
    component: projects
    component-asset: projects-service
    example: test
    heritage: Tiller
    product: design-center
    product-component: design-center-projects
    release: design-center-projects-service
  name: projects-service-nfs-block
  selfLink: /api/v1/namespaces/design-center/persistentvolumeclaims/projects-service-nfs-block
spec:
  accessModes:
  - ReadWriteOnce
  dataSource: null
  resources:
    requests:
      storage: 20Gi
  selector:
    matchLabels:
      class: projects-service-nfs
  storageClassName: ""
  volumeMode: Block
  volumeName: projects-service-nfs
Version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-08T16:31:10Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:36:14Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Looks like at some point you updated/expanded the PVC, which calls:
func (expc *expandController) pvcUpdate(oldObj, newObj interface{})
...
Then in the function it tries to find a plugin for expansion, and it can't find one here:
volumePlugin, err := expc.volumePluginMgr.FindExpandablePluginBySpec(volumeSpec)
if err != nil || volumePlugin == nil {
	err = fmt.Errorf("didn't find a plugin capable of expanding the volume; " +
		"waiting for an external controller to process this PVC")
	...
	return
}
If you look at this document, it shows that the following volume types support PVC expansion with in-tree plugins: AWS-EBS, GCE-PD, Azure Disk, Azure File, Glusterfs, Cinder, Portworx, and Ceph RBD. NFS is not one of them, which is why you are seeing the event. It looks like it could be supported in the future, or it could be supported with a custom plugin.
If you haven't updated the PVC, I would recommend using the same capacity for the PV and the PVC, as described here.
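As a sketch of that last suggestion, the PVC spec would simply request the PV's full 30Gi instead of 20Gi (everything else as in the question):
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi              # matches the PV capacity
  storageClassName: ""
  volumeMode: Block
  volumeName: projects-service-nfs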

Kubernetes PersistentVolumeClaim issues in AWS-using volumeClaimTemplates pending state

We have had success creating the pods, services, and replication controllers according to our project requirements. Now we are planning to set up persistent storage in AWS using Kubernetes. I have created the YAML file to create an EBS volume in AWS, and it works fine as expected. I am able to claim the volume and successfully mount it to my pod (this is for a single replica only).
I am able to create the file. The volume is also being created, but my Pods go into a Pending state while the volume still shows as available in AWS. I cannot see any error logs there.
Storage file:
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: mongo-ssd
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
Main file:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web2
spec:
  selector:
    matchLabels:
      app: mongodb
  serviceName: "mongodb"
  replicas: 2
  template:
    metadata:
      labels:
        app: mongodb
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      containers:
      - image: mongo
        name: mongodb
        ports:
        - name: web2
          containerPort: 27017
          hostPort: 27017
        volumeMounts:
        - mountPath: "/opt/couchbase/var"
          name: mypd1
  volumeClaimTemplates:
  - metadata:
      name: mypd1
      annotations:
        volume.alpha.kubernetes.io/storage-class: mongo-ssd
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
Kubectl version:
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T10:09:24Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:23:29Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
I can see you have used hostPort in your container. In this case, if you do not have more than one node in your cluster, one Pod will remain Pending because it will not fit on any node.
containers:
- image: mongo
  name: mongodb
  ports:
  - name: web2
    containerPort: 27017
    hostPort: 27017
I am getting this error when I describe my pending Pod
Type     Reason            Age                From               Message
----     ------            ----               ----               -------
Warning  FailedScheduling  27s (x7 over 58s)  default-scheduler  No nodes are available that match all of the predicates: PodFitsHostPorts (1).
A hostPort in your container is bound to a port on the node. Suppose you are using hostPort 10733 and another pod is already using that port; your pod can't use it, so it stays in a Pending state. And if you have 2 replicas and both pods are scheduled on the same node, the second one can't start either.
So you need to use a hostPort that you can be sure no one else is using.
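As an alternative sketch (my own suggestion, not from the original answer): for a StatefulSet like this, a headless Service matching serviceName: "mongodb" usually removes the need for hostPort altogether, so replicas can run on any node without port conflicts:
apiVersion: v1
kind: Service
metadata:
  name: mongodb
spec:
  clusterIP: None            # headless service, matches serviceName in the StatefulSet
  selector:
    app: mongodb
  ports:
  - name: mongo
    port: 27017
    targetPort: 27017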