Remove nodeSelectorTerms param in manifest deployment - Kubernetes

I use this manifest configuration to deploy a registry into a 3-node Kubernetes cluster:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
  namespace: registry-space
spec:
  capacity:
    storage: 5Gi # specify your own size
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  local:
    path: /opt/registry # can be any path
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kubernetes2
  accessModes:
  - ReadWriteMany # only 1 node will read/write on the path.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv1-claim
  namespace: registry-space
spec: # should match specs added in the PersistentVolume
  accessModes:
  - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: private-repository-k8s
  namespace: registry-space
  labels:
    app: private-repository-k8s
spec:
  replicas: 1
  selector:
    matchLabels:
      app: private-repository-k8s
  template:
    metadata:
      labels:
        app: private-repository-k8s
    spec:
      volumes:
      - name: certs-vol
        hostPath:
          path: /opt/certs
          type: Directory
      - name: task-pv-storage
        persistentVolumeClaim:
          claimName: pv1-claim # specify the PVC that you've created. PVC and Deployment must be in the same namespace.
      containers:
      - image: registry:2
        name: private-repository-k8s
        imagePullPolicy: IfNotPresent
        env:
        - name: REGISTRY_HTTP_TLS_CERTIFICATE
          value: "/opt/certs/registry.crt"
        - name: REGISTRY_HTTP_TLS_KEY
          value: "/opt/certs/registry.key"
        ports:
        - containerPort: 5000
        volumeMounts:
        - name: certs-vol
          mountPath: /opt/certs
        - name: task-pv-storage
          mountPath: /opt/registry
I manually created directories on every node under /opt/certs and /opt/registry.
But when I try to deploy the manifest without hardcoding nodeSelectorTerms for the control plane, I get this error:
kubernetes@kubernetes1:/opt/registry$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-58dbc876ff-fsjd5 1/1 Running 1 (74m ago) 84m
kube-system calico-node-5brzt 1/1 Running 1 (73m ago) 84m
kube-system calico-node-nph9n 1/1 Running 1 (76m ago) 84m
kube-system calico-node-pcd74 1/1 Running 1 (74m ago) 84m
kube-system calico-node-ph2ht 1/1 Running 1 (76m ago) 84m
kube-system coredns-565d847f94-7pswp 1/1 Running 1 (74m ago) 105m
kube-system coredns-565d847f94-tlrfr 1/1 Running 1 (74m ago) 105m
kube-system etcd-kubernetes1 1/1 Running 2 (74m ago) 105m
kube-system kube-apiserver-kubernetes1 1/1 Running 2 (74m ago) 105m
kube-system kube-controller-manager-kubernetes1 1/1 Running 2 (74m ago) 105m
kube-system kube-proxy-4slm4 1/1 Running 1 (76m ago) 86m
kube-system kube-proxy-4tnx2 1/1 Running 2 (74m ago) 105m
kube-system kube-proxy-9dgsj 1/1 Running 1 (73m ago) 85m
kube-system kube-proxy-cgr44 1/1 Running 1 (76m ago) 86m
kube-system kube-scheduler-kubernetes1 1/1 Running 2 (74m ago) 105m
registry-space private-repository-k8s-6d5d954b4f-xkmj5 0/1 Pending 0 4m55s
kubernetes@kubernetes1:/opt/registry$
Do you know how I can let Kubernetes decide where to deploy the pod?

It seems like your node has taints, which is why the pods are not getting scheduled. Can you try using this command to remove the taint from your node?
kubectl taint nodes <node-name> node-role.kubernetes.io/master-
or
kubectl taint nodes --all node-role.kubernetes.io/master-
To get the node name, use kubectl get nodes.
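As an alternative to removing the taint entirely (a sketch, not from the original thread): the control-plane taint can be left in place and this one pod allowed onto the node by adding a toleration to the Deployment's pod template. The taint key below is the default on kubeadm control planes from v1.24 on; older clusters use node-role.kubernetes.io/master instead.

```yaml
# Sketch: tolerate the control-plane taint instead of removing it.
# Assumption: the node carries the default node-role.kubernetes.io/control-plane taint.
spec:
  template:
    spec:
      tolerations:
      - key: "node-role.kubernetes.io/control-plane"
        operator: "Exists"
        effect: "NoSchedule"
```

This keeps other workloads off the control plane while still letting the registry pod schedule there.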
The user was able to get the pod scheduled after running the command below:
kubectl taint nodes kubernetes1 node-role.kubernetes.io/control-plane:NoSchedule-
Now the pod is failing with CrashLoopBackOff, which implies it has been scheduled.
Can you please check if this pod gets scheduled and runs properly?
apiVersion: v1
kind: Pod
metadata:
  name: nginx1
  namespace: test
spec:
  containers:
  - name: webserver
    image: nginx:alpine
    ports:
    - containerPort: 80
    resources:
      requests:
        memory: "64Mi"
        cpu: "200m"
      limits:
        memory: "128Mi"
        cpu: "350m"

Related

Remove nodeSelectorTerms param

Let's try the following (disregard the paths you currently have and use the ones in the example; you can change them later). We can adapt it to your needs once dynamic provisioning is working. At the very bottom there is a mysql image as an example; use busybox or leave it as it is to get a better understanding.
NFS server install. Create an NFS share on the file server (usually the master node):
#Include prerequisites
sudo apt update -y # Run updates prior to installing
sudo apt install nfs-kernel-server # Install NFS Server
sudo systemctl enable nfs-server # Set nfs-server to load on startups
sudo systemctl status nfs-server # Check its status
# check server status
root@worker03:/home/brucelee# sudo systemctl status nfs-server
● nfs-server.service - NFS server and services
Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled; vendor preset: enabled)
Active: active (exited) since Fri 2021-08-13 04:25:50 UTC; 18s ago
Process: 2731 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Process: 2732 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
Main PID: 2732 (code=exited, status=0/SUCCESS)
Aug 13 04:25:49 linux03 systemd[1]: Starting NFS server and services...
Aug 13 04:25:50 linux03 systemd[1]: Finished NFS server and services.
# Prepare an empty folder
sudo su # enter root
nfsShare=/nfs-share
mkdir $nfsShare # create folder if it doesn't exist
chown nobody: $nfsShare
chmod -R 777 $nfsShare # not recommended for production
# Edit the nfs server share configs
vim /etc/exports
# add these lines
/nfs-share x.x.x.x/24(rw,sync,no_subtree_check,no_root_squash,no_all_squash,insecure)
# Export directory and make it available
sudo exportfs -rav
# Verify nfs shares
sudo exportfs -v
# Enable ingress for subnet
sudo ufw allow from x.x.x.x/24 to any port nfs
# Check firewall status - inactive firewall is fine for testing
root@worker03:/home/brucelee# sudo ufw status
Status: inactive
NFS Client install (Worker nodes)
# Install prerequisites
sudo apt update -y
sudo apt install nfs-common
# Mount the nfs share
remoteShare=server.ip.here:/nfs-share
localMount=/mnt/testmount
sudo mkdir -p $localMount
sudo mount $remoteShare $localMount
# Unmount
sudo umount $localMount
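If the mount should survive reboots, it can also be recorded in /etc/fstab instead of being mounted by hand; a sketch reusing the placeholder server address and mount point from above (_netdev delays mounting until the network is up):

```
# /etc/fstab entry (placeholder server address, same share and mount point as above)
server.ip.here:/nfs-share  /mnt/testmount  nfs  defaults,_netdev  0  0
```

After editing, sudo mount -a picks up the new entry without a reboot.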
Dynamic provisioning and default storage class
# Pull the source code
workingDirectory=~/nfs-dynamic-provisioner
mkdir $workingDirectory && cd $workingDirectory
git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
cd nfs-subdir-external-provisioner/deploy
# Deploying the service accounts, accepting defaults
k create -f rbac.yaml
# Editing storage class
vim class.yaml
##############################################
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-ssd # set this value
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name; must match the deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "true" # a value of true means retaining data upon pod terminations
allowVolumeExpansion: true # this attribute doesn't exist by default
##############################################
# Deploying storage class
k create -f class.yaml
# Sample output
stoic@masternode:~/nfs-dynamic-provisioner/nfs-subdir-external-provisioner/deploy$ k get storageclasses.storage.k8s.io
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
managed-nfs-ssd k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 33s
nfs-class kubernetes.io/nfs Retain Immediate true 193d
nfs-client (default) cluster.local/nfs-subdir-external-provisioner Delete Immediate true 12d
# Example of patching an applied object
kubectl patch storageclass managed-nfs-ssd -p '{"allowVolumeExpansion":true}'
kubectl patch storageclass managed-nfs-ssd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' # Set storage class as default
# Editing deployment of dynamic nfs provisioning service pod
vim deployment.yaml
##############################################
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: k8s-sigs.io/nfs-subdir-external-provisioner
        - name: NFS_SERVER
          value: X.X.X.X # change this value
        - name: NFS_PATH
          value: /nfs-share # change this value
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.100.93 # change this value
          path: /nfs-share # change this value
##############################################
# Creating nfs provisioning service pod
k create -f deployment.yaml
# Troubleshooting: example where the deployment was pending variables to be created by rbac.yaml
stoic@masternode: $ k describe deployments.apps nfs-client-provisioner
Name: nfs-client-provisioner
Namespace: default
CreationTimestamp: Sat, 14 Aug 2021 00:09:24 +0000
Labels: app=nfs-client-provisioner
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=nfs-client-provisioner
Replicas: 1 desired | 0 updated | 0 total | 0 available | 1 unavailable
StrategyType: Recreate
MinReadySeconds: 0
Pod Template:
Labels: app=nfs-client-provisioner
Service Account: nfs-client-provisioner
Containers:
nfs-client-provisioner:
Image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
Port: <none>
Host Port: <none>
Environment:
PROVISIONER_NAME: k8s-sigs.io/nfs-subdir-external-provisioner
NFS_SERVER: X.X.X.X
NFS_PATH: /nfs-share
Mounts:
/persistentvolumes from nfs-client-root (rw)
Volumes:
nfs-client-root:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: X.X.X.X
Path: /nfs-share
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Progressing True NewReplicaSetCreated
Available False MinimumReplicasUnavailable
ReplicaFailure True FailedCreate
OldReplicaSets: <none>
NewReplicaSet: nfs-client-provisioner-7768c6dfb4 (0/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 3m47s deployment-controller Scaled up replica set nfs-client-provisioner-7768c6dfb4 to 1
# Get the default nfs storage class
echo $(kubectl get sc -o=jsonpath='{range .items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=="true")]}{@.metadata.name}{"\n"}{end}')
PersistentVolumeClaim (notice the storageClassName: it is the one defined in the previous step)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-persistentvolume-claim
  namespace: default
spec:
  storageClassName: nfs-client
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
PersistentVolume
It is created dynamically! Confirm it is there with the correct values by running this command:
kubectl get pv -A
Deployment
On your deployment you need two things: volumeMounts (for each container) and volumes (for all containers).
Notice that volumeMounts->name=data and volumes->name=data because they should match, and that claimName is my-persistentvolume-claim, which is the same as your PVC.
...
spec:
  containers:
  - name: mysql
    image: mysql:8.0.30
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql
      subPath: mysql
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-persistentvolume-claim

Cannot create private Kubernetes registry

I want to create a private Kubernetes registry from this tutorial: https://www.linuxtechi.com/setup-private-docker-registry-kubernetes/
I implemented this:
Generate Self-Signed Certificate
cd /opt
sudo mkdir certs
cd certs
sudo touch registry.key
cd /opt
sudo openssl req -newkey rsa:4096 -nodes -sha256 -keyout \
./certs/registry.key -x509 -days 365 -out ./certs/registry.crt
ls -l certs/
Create registry folder
cd /opt
mkdir registry
Copy-paste private-registry.yaml into /opt/registry
apiVersion: apps/v1
kind: Deployment
metadata:
  name: private-repository-k8s
  labels:
    app: private-repository-k8s
spec:
  replicas: 1
  selector:
    matchLabels:
      app: private-repository-k8s
  template:
    metadata:
      labels:
        app: private-repository-k8s
    spec:
      volumes:
      - name: certs-vol
        hostPath:
          path: /opt/certs
          type: Directory
      - name: registry-vol
        hostPath:
          path: /opt/registry
          type: Directory
      containers:
      - image: registry:2
        name: private-repository-k8s
        imagePullPolicy: IfNotPresent
        env:
        - name: REGISTRY_HTTP_TLS_CERTIFICATE
          value: "/certs/registry.crt"
        - name: REGISTRY_HTTP_TLS_KEY
          value: "/certs/registry.key"
        ports:
        - containerPort: 5000
        volumeMounts:
        - name: certs-vol
          mountPath: /certs
        - name: registry-vol
          mountPath: /var/lib/registry
kubernetes@kubernetes1:/opt/registry$ kubectl create -f private-registry.yaml
deployment.apps/private-repository-k8s created
kubernetes@kubernetes1:/opt/registry$ kubectl get deployments private-repository-k8s
NAME READY UP-TO-DATE AVAILABLE AGE
private-repository-k8s 0/1 1 0 12s
kubernetes@kubernetes1:/opt/registry$
I have the following questions:
I have a control plane and 2 worker nodes. Is it possible to have a folder located only on the control plane under /opt/registry and deploy images on all worker nodes without using shared folders?
As an alternative, more resilient solution: I want to have a control plane and 2 worker nodes. Is it possible to have a folder located on all worker nodes and on the control plane under /opt/registry and deploy images on all worker nodes without manually created shared folders? I want Kubernetes to manage repository replication on all nodes, i.e. data in /opt/registry to be synchronized automatically by Kubernetes.
Do you know how I can debug this configuration? As you can see, the pod is not starting.
EDIT: Log file:
kubernetes@kubernetes1:/opt/registry$ kubectl logs private-repository-k8s-6ddbcd9c45-s6dfq
Error from server (BadRequest): container "private-repository-k8s" in pod "private-repository-k8s-6ddbcd9c45-s6dfq" is waiting to start: ContainerCreating
kubernetes@kubernetes1:/opt/registry$
Attempt 2:
I tried this configuration deployed from control plane:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 256Mi # specify your own size
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  local:
    path: /opt/registry # can be any path
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions: # specify the node label which maps to your control-plane node.
        - key: kubernetes1
          operator: In
          values:
          - controlplane-1
  accessModes:
  - ReadWriteOnce # only 1 node will read/write on the path.
  # - ReadWriteMany # multiple nodes will read/write on the path
Note! The control plane hostname is kubernetes1, so I changed the value in the above configuration. I get this:
kubernetes@kubernetes1:~$ cd /opt/registry
kubernetes@kubernetes1:/opt/registry$ kubectl create -f private-registry1.yaml
persistentvolume/pv1 created
kubernetes@kubernetes1:/opt/registry$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default private-repository-k8s-6ddbcd9c45-s6dfq 0/1 ContainerCreating 0 2d1h
kube-system calico-kube-controllers-58dbc876ff-dgs77 1/1 Running 4 (125m ago) 2d13h
kube-system calico-node-czmzc 1/1 Running 4 (125m ago) 2d13h
kube-system calico-node-q4lxz 1/1 Running 4 (125m ago) 2d13h
kube-system coredns-565d847f94-k94z2 1/1 Running 4 (125m ago) 2d13h
kube-system coredns-565d847f94-nt27m 1/1 Running 4 (125m ago) 2d13h
kube-system etcd-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kube-system kube-apiserver-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kube-system kube-controller-manager-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kube-system kube-proxy-97djs 1/1 Running 5 (125m ago) 2d13h
kube-system kube-proxy-d8bzs 1/1 Running 4 (125m ago) 2d13h
kube-system kube-scheduler-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kubernetes@kubernetes1:/opt/registry$ kubectl logs private-repository-k8s-6ddbcd9c45-s6dfq
Error from server (BadRequest): container "private-repository-k8s" in pod "private-repository-k8s-6ddbcd9c45-s6dfq" is waiting to start: ContainerCreating
Unfortunately, again the container is not created.
For the 1st question, you can try creating a PersistentVolume with node affinity set to the specific control-plane node and tying it to the deployment via a PersistentVolumeClaim. Here's an example:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 256Mi # specify your own size
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  local:
    path: /opt/registry # can be any path
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions: # specify the node label which maps to your control-plane node.
        - key: kubernetes.io/hostname
          operator: In
          values:
          - controlplane-1
  accessModes:
  - ReadWriteOnce # only 1 node will read/write on the path.
  # - ReadWriteMany # multiple nodes will read/write on the path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv1-claim
spec: # should match specs added in the PersistentVolume
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 256Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: private-repository-k8s
  labels:
    app: private-repository-k8s
spec:
  replicas: 1
  selector:
    matchLabels:
      app: private-repository-k8s
  template:
    metadata:
      labels:
        app: private-repository-k8s
    spec:
      volumes:
      - name: task-pv-storage
        persistentVolumeClaim:
          claimName: pv1-claim # specify the PVC that you've created. PVC and Deployment must be in the same namespace.
      containers:
      - image: registry:2
        name: private-repository-k8s
        imagePullPolicy: IfNotPresent
        env:
        - name: REGISTRY_HTTP_TLS_CERTIFICATE
          value: "/certs/registry.crt"
        - name: REGISTRY_HTTP_TLS_KEY
          value: "/certs/registry.key"
        ports:
        - containerPort: 5000
        volumeMounts:
        - name: task-pv-storage
          mountPath: /opt/registry
For question # 2, can you share the logs of your pod?
You can try with the following file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: private-repository-k8s
  labels:
    app: private-repository-k8s
spec:
  replicas: 1
  selector:
    matchLabels:
      app: private-repository-k8s
  template:
    metadata:
      labels:
        app: private-repository-k8s
    spec:
      volumes:
      - name: task-pv-storage
        persistentVolumeClaim:
          claimName: pv1-claim # specify the PVC that you've created. PVC and Deployment must be in the same namespace.
      containers:
      - image: registry:2
        name: private-repository-k8s
        imagePullPolicy: IfNotPresent
        env:
        - name: REGISTRY_HTTP_TLS_CERTIFICATE
          value: "/certs/registry.crt"
        - name: REGISTRY_HTTP_TLS_KEY
          value: "/certs/registry.key"
        ports:
        - containerPort: 5000
        volumeMounts:
        - name: task-pv-storage
          mountPath: /opt/registry
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv1-claim
spec: # should match specs added in the PersistentVolume
  accessModes:
  - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 256Mi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 256Mi # specify your own size
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  local:
    path: /opt/registry # can be any path
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions: # specify the node label which maps to your control-plane node.
        - key: kubernetes.io/hostname
          operator: In
          values:
          - controlplane-1
  accessModes:
  - ReadWriteMany

Pod is in pending state

I have made a custom scheduler like below:
root@kmaster:~# cat /etc/kubernetes/manifests/my-scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: my-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=false
    - --port=10261
    - --secure-port=10269
    env:
    - name: no_proxy
      value: ',10.74.46.2,10.74.46.3,10.74.46.4'
    - name: NO_PROXY
      value: ',10.74.46.2,10.74.46.3,10.74.46.4'
    - name: HTTPS_PROXY
      value: http://127.0.0.1:3129
    - name: HTTP_PROXY
      value: http://127.0.0.1:3129
    image: k8s.gcr.io/kube-scheduler:v1.22.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10269
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: my-scheduler
    resources:
      requests:
        cpu: 100m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10269
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kubeconfig
status: {}
root@kmaster:~#
My custom scheduler pod is running successfully:
root@kmaster:~# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-78fcd69978-6dcft 1/1 Running 10 (2d19h ago) 47d
coredns-78fcd69978-8224d 1/1 Running 10 (2d19h ago) 47d
etcd-kmaster 1/1 Running 4 (21h ago) 15d
kube-apiserver-kmaster 1/1 Running 68 (2d19h ago) 47d
kube-controller-manager-kmaster 1/1 Running 33 (21h ago) 47d
kube-flannel-ds-nvpgz 1/1 Running 11 (2d19h ago) 47d
kube-flannel-ds-xnvvw 1/1 Running 11 (2d19h ago) 47d
kube-flannel-ds-ztgql 1/1 Running 4 (2d19h ago) 47d
kube-proxy-h2t7s 1/1 Running 4 (2d19h ago) 47d
kube-proxy-pq9t4 1/1 Running 10 (2d19h ago) 47d
kube-proxy-vgcw7 1/1 Running 8 (2d19h ago) 47d
kube-scheduler-kmaster 1/1 Running 0 21h
my-scheduler-kmaster 1/1 Running 0 7h17m
Then created a pod using that scheduler:
root@kmaster:~# cat my_pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: webapp-color
  name: webapp-color
spec:
  containers:
  - image: 10.74.46.13:5000/webapp-color:v1
    name: webapp-color
  schedulerName: my-scheduler
root@kmaster:~#
But my pod is in pending state:
root@kmaster:~# kubectl get pod
NAME READY STATUS RESTARTS AGE
webapp-color 0/1 Pending 0 4h12m
root@kmaster:~#
Describe the pod:
root@kmaster:~# kubectl describe pod webapp-color
Name: webapp-color
Namespace: default
Priority: 0
Node: <none>
Labels: run=webapp-color
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
webapp-color:
Image: 10.74.46.13:5000/webapp-color:v1
Port: <none>
Host Port: <none>
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mkh4z (ro)
Volumes:
kube-api-access-mkh4z:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
root@kmaster:~#
Kubernetes version:
root@kmaster:~# kubectl version --short
Client Version: v1.22.0
Server Version: v1.22.0
root@kmaster:~#
Please help me find out why the pod is in the pending state; I'm not getting any clue.
Pending means the custom scheduler is not working, but I can't find the reason.
Regards
The job of the scheduler is to decide which node a pod will run on. Until the scheduler decides this, the pod stays in the Pending state. It seems that your custom scheduler is not working properly.
Try looking into its logs with: kubectl logs my-scheduler-kmaster -n kube-system
to see if the scheduler is making any decisions.

How to identify unhealthy pods in a statefulset

I have a StatefulSet with 6 replicas.
All of a sudden the StatefulSet thinks there are 5 ready replicas out of 6. When I look at the pod status, all 6 pods are ready with all readiness checks passed (1/1).
Now I am trying to find logs or a status that shows which pod is unhealthy according to the StatefulSet, so I can debug further.
Where can I find information or logs for the StatefulSet that could tell me which pod is unhealthy? I have already checked the output of describe pods and describe statefulset, but neither of them shows which pod is unhealthy.
So let's say you created the following StatefulSet:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    user: anurag
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      user: anurag # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 6 # by default is 1
  template:
    metadata:
      labels:
        user: anurag # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "standard"
      resources:
        requests:
          storage: 1Gi
Result is:
kubectl get StatefulSet web -o wide
NAME READY AGE CONTAINERS IMAGES
web 6/6 8m31s nginx k8s.gcr.io/nginx-slim:0.8
We can also check the StatefulSet's status in:
kubectl get statefulset web -o yaml
status:
  collisionCount: 0
  currentReplicas: 6
  currentRevision: web-599978b754
  observedGeneration: 1
  readyReplicas: 6
  replicas: 6
  updateRevision: web-599978b754
  updatedReplicas: 6
As per Debugging a StatefulSet, you can list all the pods which belong to a current StatefulSet using labels.
$ kubectl get pods -l user=anurag
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 13m
web-1 1/1 Running 0 12m
web-2 1/1 Running 0 12m
web-3 1/1 Running 0 12m
web-4 1/1 Running 0 12m
web-5 1/1 Running 0 11m
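Not-ready pods in a listing like the one above can also be picked out mechanically by comparing the two halves of the READY column. A small sketch with hardcoded sample data (the label selector user=anurag comes from the StatefulSet above; the pod rows here are invented for illustration); against a live cluster you would pipe `kubectl get pods -l user=anurag --no-headers` into the same awk:

```shell
# Filter pods whose READY column (e.g. 0/1) shows fewer ready
# containers than expected. Sample output is inlined so the filter
# can be seen working in isolation.
pods='web-0 1/1 Running 0 13m
web-1 0/1 Running 3 12m
web-2 1/1 Running 0 12m'
printf '%s\n' "$pods" | awk 'split($2, r, "/") && r[1] != r[2] { print $1 }'
# prints: web-1
```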
Here, at this point, if any of your pods aren't available, you will definitely see it. The next debugging step is Debug Pods and ReplicationControllers, including checking whether you have sufficient resources to start all these pods, and so on.
Describing the problematic pod (kubectl describe pod web-0) should give you the answer to why that happened, at the very end, in the Events section.
For example, if you use the original YAML as-is from the statefulset components example, you will get an error and none of your pods will be up and running. (The reason is storageClassName: "my-storage-class".)
The exact error, and an understanding of what is happening, comes from describing the problematic pod... that's how it works.
kubectl describe pod web-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 31s (x2 over 31s) default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.

How to get logs of deployment from Kubernetes?

I am creating an InfluxDB deployment in a Kubernetes cluster (v1.15.2), this is my yaml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-influxdb-amd64:v1.5.2
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb
And this is the deployment status:
$ kubectl get deployment -n kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
coredns 1/1 1 1 163d
kubernetes-dashboard 1/1 1 1 164d
monitoring-grafana 0/1 0 0 12m
monitoring-influxdb 0/1 0 0 11m
Now I've been waiting 30 minutes and there is still no pod available. How do I check the deployment logs from the command line? I cannot access the Kubernetes dashboard right now. I am searching for a command to get the pod logs, but there is no pod available yet. I already tried to add a label to the node:
kubectl label nodes azshara-k8s03 k8s-app=influxdb
This is my deployment describe content:
$ kubectl describe deployments monitoring-influxdb -n kube-system
Name: monitoring-influxdb
Namespace: kube-system
CreationTimestamp: Wed, 04 Mar 2020 11:15:52 +0800
Labels: k8s-app=influxdb
task=monitoring
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{},"name":"monitoring-influxdb","namespace":"kube-system"...
Selector: k8s-app=influxdb,task=monitoring
Replicas: 1 desired | 0 updated | 0 total | 0 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: k8s-app=influxdb
task=monitoring
Containers:
influxdb:
Image: registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-influxdb-amd64:v1.5.2
Port: <none>
Host Port: <none>
Environment: <none>
Mounts:
/data from influxdb-storage (rw)
Volumes:
influxdb-storage:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
OldReplicaSets: <none>
NewReplicaSet: <none>
Events: <none>
This is another way to get logs:
$ kubectl -n kube-system logs -f deployment/monitoring-influxdb
error: timed out waiting for the condition
There is no output for this command:
kubectl logs --selector k8s-app=influxdb
There is all my pod in kube-system namespace:
~/Library/Mobile Documents/com~apple~CloudDocs/Document/k8s/work/heapster/heapster-deployment ⌚ 11:57:40
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-569fd64d84-5q5pj 1/1 Running 0 46h
kubernetes-dashboard-6466b68b-z6z78 1/1 Running 0 11h
traefik-ingress-controller-hx4xd 1/1 Running 0 11h
kubectl logs deployment/<name-of-deployment> # logs of deployment
kubectl logs -f deployment/<name-of-deployment> # follow logs
You can try kubectl describe deploy monitoring-influxdb to get a high-level view of the deployment; maybe there is some information there.
For more detailed logs, first get the pods: kubectl get po
Then, request the pod logs: kubectl logs <pod-name>
Adding references to two great tools that might help you view cluster logs:
If you wish to view logs from your terminal without using a "heavy" 3rd-party logging solution, I would consider using K9s, which is a great CLI tool that helps you get control over your cluster.
If you are not bound to the CLI only and still want to run locally, I would recommend Lens.