Kubernetes v1.21.2: 'selfLink was empty, can't make reference'

I get the log error below for a pod, even though I updated the Kubernetes orchestrator, cluster, and nodes to v1.21.2 (before the update they were on v1.20.7). I found a reference saying that selfLink is completely removed as of v1.21. Why am I getting this error, and how can I resolve it?
Error log from kubectl logs (podname):
...
2021-08-10T03:07:19.535Z INFO setup starting manager
2021-08-10T03:07:19.536Z INFO controller-runtime.manager starting metrics server {"path": "/metrics"}
E0810 03:07:19.550636 1 event.go:247] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"controller-leader-election-helper", GenerateName:"", Namespace:"kubestone-system", SelfLink:"", UID:"b01651ed-7d54-4815-a047-57b16d26cfdf", ResourceVersion:"65956", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764161639, loc:(*time.Location)(0x21639e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"kubestone-controller-manager-f467b7c47-cv7ws_1305bc36-f988-11eb-81fc-a20dfb9758a2\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-10T03:07:19Z\",\"renewTime\":\"2021-08-10T03:07:19Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0000956a0), Fields:(*v1.Fields)(nil)}}}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'kubestone-controller-manager-f467b7c47-cv7ws_1305bc36-f988-11eb-81fc-a20dfb9758a2 became leader'
2021-08-10T03:07:21.636Z INFO controller-runtime.controller Starting Controller {"controller": "kafkabench"}
...
Output of kubectl get nodes showing the Kubernetes version (the node the pod is scheduled on is aks-default-41152893-vmss000000):
PS C:\Users\user> kubectl get nodes -A
NAME STATUS ROLES AGE VERSION
aks-default-41152893-vmss000000 Ready agent 5h32m v1.21.2
aks-default-41152893-vmss000001 Ready agent 5h29m v1.21.2
aksnpwi000000 Ready agent 5h32m v1.21.2
aksnpwi000001 Ready agent 5h26m v1.21.2
aksnpwi000002 Ready agent 5h19m v1.21.2
Output of kubectl describe pods (pod name: kubestone-controller-manager-f467b7c47-cv7ws):
PS C:\Users\user> kubectl describe pods kubestone-controller-manager-f467b7c47-cv7ws -n kubestone-system
Name: kubestone-controller-manager-f467b7c47-cv7ws
Namespace: kubestone-system
Priority: 0
Node: aks-default-41152893-vmss000000/10.240.0.4
Start Time: Mon, 09 Aug 2021 23:07:16 -0400
Labels: control-plane=controller-manager
pod-template-hash=f467b7c47
Annotations: <none>
Status: Running
IP: 10.240.0.21
IPs:
IP: 10.240.0.21
Controlled By: ReplicaSet/kubestone-controller-manager-f467b7c47
Containers:
manager:
Container ID: containerd://01594df678a2c1d7163c913eff33881edf02e39633b1a4b51dcf5fb769d0bc1e
Image: user2/imagename
Image ID: docker.io/user2/imagename@sha256:aa049f135931192630ceda014d7a24306442582dbeeaa36ede48e6599b6135e1
Port: <none>
Host Port: <none>
Command:
/manager
Args:
--enable-leader-election
State: Running
Started: Mon, 09 Aug 2021 23:07:18 -0400
Ready: True
Restart Count: 0
Limits:
cpu: 100m
memory: 30Mi
Requests:
cpu: 100m
memory: 20Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jvjjh (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-jvjjh:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 23m default-scheduler Successfully assigned kubestone-system/kubestone-controller-manager-f467b7c47-cv7ws to aks-default-41152893-vmss000000
Normal Pulling 23m kubelet Pulling image "user2/imagename"
Normal Pulled 23m kubelet Successfully pulled image "user2/imagename" in 354.899039ms
Normal Created 23m kubelet Created container manager
Normal Started 23m kubelet Started container manager

Kubestone has had no releases since 2019; it needs to upgrade its copy of the Kubernetes Go client, whose old event code still builds object references from selfLink. That said, this appears to affect only the event recorder, so it is probably not a big deal.
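If you are building Kubestone yourself, a minimal sketch of the dependency bump (the module versions below are illustrative, not something upstream has published) is to raise the Kubernetes client libraries in go.mod and rebuild:
go get k8s.io/api@v0.21.2 k8s.io/apimachinery@v0.21.2 k8s.io/client-go@v0.21.2
go get sigs.k8s.io/controller-runtime@v0.9.2
go mod tidy
Newer client-go/controller-runtime releases no longer fall back to selfLink when constructing event references, so the warning should go away; in the meantime it only means the 'became leader' event is not reported.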

Related

kubernetes pod error: Message /usr/bin/mvn: exec format error

I am trying to create a pod in Kubernetes inside minikube, but I get the message exec /usr/bin/mvn: exec format error.
Background: I am using YAKS from citrusframework, which is used for BDD testing and supports Kubernetes.
I create this pod using yaks run helloworld.feature, which creates a pod that runs the tests defined in the .feature file. YAKS is based on Cucumber.
kubectl describe pod test
Name: test-test-cf93r5qsoe02b4hj9o2g-jhqxf
Namespace: default
Priority: 0
Service Account: yaks-viewer
Node: minikube/192.168.49.2
Start Time: Thu, 26 Jan 2023 08:45:11 +0000
Labels: app=yaks
controller-uid=ff467e84-6ed5-47d7-9b6a-bd10128c589e
job-name=test-test-cf93r5qsoe02b4hj9o2g
yaks.citrusframework.org/test=test
yaks.citrusframework.org/test-id=cf93r5qsoe02b4hj9o2g
Annotations: <none>
Status: Failed
IP: 172.17.0.5
IPs:
IP: 172.17.0.5
Controlled By: Job/test-test-cf93r5qsoe02b4hj9o2g
Containers:
test:
Container ID: docker://e9017e0e5727d736ddbb6057e804f181d558492db18aa914d2cbc0eaeb3d9ee3
Image: docker.io/citrusframework/yaks:0.12.0
Image ID: docker-pullable://citrusframework/yaks@sha256:3504e26ae47bf5a613d38f641f4eb1b97e0bf72678e67ef9d13f41b74b31a70c
Port: <none>
Host Port: <none>
Command:
mvn
--no-snapshot-updates
-B
-q
--no-transfer-progress
-f
/deployments/data/yaks-runtime-maven
-s
/deployments/artifacts/settings.xml
verify
-Dit.test=org.citrusframework.yaks.feature.Yaks_IT
-Dmaven.repo.local=/deployments/artifacts/m2
State: Terminated
Reason: Error
Message: exec /usr/bin/mvn: exec format error
Exit Code: 1
Started: Thu, 26 Jan 2023 08:45:12 +0000
Finished: Thu, 26 Jan 2023 08:45:12 +0000
Ready: False
Restart Count: 0
Environment:
YAKS_TERMINATION_LOG: /dev/termination-log
YAKS_TESTS_PATH: /etc/yaks/tests
YAKS_SECRETS_PATH: /etc/yaks/secrets
YAKS_NAMESPACE: default
YAKS_CLUSTER_TYPE: KUBERNETES
YAKS_TEST_NAME: test
YAKS_TEST_ID: cf93r5qsoe02b4hj9o2g
Mounts:
/etc/yaks/secrets from secrets (rw)
/etc/yaks/tests from tests (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hn7kv (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
tests:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: test-test
Optional: false
secrets:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kube-api-access-hn7kv:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
Here are the kubectl get all results:
NAME READY STATUS RESTARTS AGE
pod/hello-minikube 1/1 Running 3 (20h ago) 20h
pod/nginx 0/1 Completed 0 21h
pod/test-hello-world-cf94ttisoe02b4hj9o30-lcdmb 0/1 Error 0 56m
pod/test-test-cf93r5qsoe02b4hj9o2g-jhqxf 0/1 Error 0 131m
pod/yaks-operator-65b956c564-h5tsx 1/1 Running 0 131m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/yaks-operator 1/1 1 1 131m
NAME DESIRED CURRENT READY AGE
replicaset.apps/yaks-operator-65b956c564 1 1 1 131m
NAME COMPLETIONS DURATION AGE
job.batch/test-hello-world-cf94ttisoe02b4hj9o30 0/1 56m 56m
job.batch/test-test-cf93r5qsoe02b4hj9o2g 0/1 131m 131m
I tried to install mvn, since the error says it is a format issue, but that shouldn't be the requirement: YAKS should also work without mvn and do its thing on its own.
Additionally, I found the following, but I don't know whether it helps:
kubectl logs deployment.apps/yaks-operator
{"level":"error","ts":1674732026.5891097,"logger":"controller.test-controller","msg":"Reconciler error","name":"test","namespace":"default","error":"invalid character 'e' looking for beginning of value","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/cdeppisc/Projects/Go/pkg/mod/sigs.k8s.io/controller-runtime#v0.11.2/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/Users/cdeppisc/Projects/Go/pkg/mod/sigs.k8s.io/controller-runtime#v0.11.2/pkg/internal/controller/controller.go:227"}

CrashLoopBackOff : Back-off restarting failed container for flask application

I am a beginner in Kubernetes and was trying to deploy my Flask application following this guide: https://medium.com/analytics-vidhya/build-a-python-flask-app-and-deploy-with-kubernetes-ccc99bbec5dc
I have successfully built a Docker image and pushed it to Docker Hub: https://hub.docker.com/repository/docker/beatrix1997/kubernetes_flask_app
However, I am having trouble debugging a pod.
This is my yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetesflaskapp-deploy
  labels:
    app: kubernetesflaskapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetesflaskapp
  template:
    metadata:
      labels:
        app: kubernetesflaskapp
    spec:
      containers:
      - name: kubernetesflaskapp
        image: beatrix1997/kubernetes_flask_app
        ports:
        - containerPort: 5000
And this is the description of the pod:
Name: kubernetesflaskapp-deploy-5764bbbd44-8696k
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Fri, 20 May 2022 11:26:33 +0100
Labels: app=kubernetesflaskapp
pod-template-hash=5764bbbd44
Annotations: <none>
Status: Running
IP: 172.17.0.12
IPs:
IP: 172.17.0.12
Controlled By: ReplicaSet/kubernetesflaskapp-deploy-5764bbbd44
Containers:
kubernetesflaskapp:
Container ID: docker://d500dc15e389190670a9273fea1d70e6bd6ab2e7053bd2480d114ad6150830f1
Image: beatrix1997/kubernetes_flask_app
Image ID: docker-pullable://beatrix1997/kubernetes_flask_app@sha256:1bfa98229f55b04f32a6b85d72860886abcc0f17295b14e173151a8e4b0f0334
Port: 5000/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 20 May 2022 11:58:38 +0100
Finished: Fri, 20 May 2022 11:58:38 +0100
Ready: False
Restart Count: 11
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zq8n7 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-zq8n7:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 33m default-scheduler Successfully assigned default/kubernetesflaskapp-deploy-5764bbbd44-8696k to minikube
Normal Pulled 33m kubelet Successfully pulled image "beatrix1997/kubernetes_flask_app" in 14.783413947s
Normal Pulled 33m kubelet Successfully pulled image "beatrix1997/kubernetes_flask_app" in 1.243534487s
Normal Pulled 32m kubelet Successfully pulled image "beatrix1997/kubernetes_flask_app" in 1.373217701s
Normal Pulling 32m (x4 over 33m) kubelet Pulling image "beatrix1997/kubernetes_flask_app"
Normal Created 32m (x4 over 33m) kubelet Created container kubernetesflaskapp
Normal Pulled 32m kubelet Successfully pulled image "beatrix1997/kubernetes_flask_app" in 1.239794774s
Normal Started 32m (x4 over 33m) kubelet Started container kubernetesflaskapp
Warning BackOff 3m16s (x138 over 33m) kubelet Back-off restarting failed container
I am using Ubuntu as my OS, if it matters at all.
Any help would be appreciated!
Many thanks!
I would check the following:
Check whether your Docker image works in Docker itself; you can run it with docker run (see the official Docker docs).
If it doesn't work, check what is wrong in your app first.
If it does, check the readiness and liveness probes (see the official Kubernetes documentation).
You can also find more hints about debugging failing pods in the Kubernetes docs.
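For the first check, a hedged one-liner (image name taken from the question, port 5000 assumed from the manifest):
docker run --rm -p 5000:5000 beatrix1997/kubernetes_flask_app
If the container exits immediately here as well, the problem is in the image/app rather than in Kubernetes.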
The error can be due to an issue in the application, as the reported reason is "Back-off restarting failed container". Please paste the output of the following command in the question for further clarification:
kubectl logs -n <NS> <pod-name>
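Since the container keeps restarting, the logs of the previous (crashed) instance are usually the interesting ones; for example, with the pod name from the question:
kubectl logs -n default kubernetesflaskapp-deploy-5764bbbd44-8696k --previous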

Why does the pod remain in a Pending state despite having a toleration set

I applied the following taint and label to a node, but the pod never reaches a Running status and I cannot seem to figure out why:
kubectl taint node k8s-worker-2 dedicated=devs:NoSchedule
kubectl label node k8s-worker-2 dedicated=devs
Here is a sample of my pod YAML file:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    security: s1
  name: pod-1
spec:
  containers:
  - image: nginx
    name: bear
    resources: {}
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "devs"
    effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: dedicated
            operator: In
            values:
            - devs
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  nodeName: k8s-master-2
status: {}
On creating the pod, it gets scheduled on the k8s-worker-2 node but remains in a Pending state before it is finally evicted. Here are sample outputs:
kubectl describe no k8s-worker-2 | grep -i taint
Taints: dedicated=devs:NoSchedule
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-1 0/1 Pending 0 9s <none> k8s-master-2 <none> <none>
# second check
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-1 0/1 Pending 0 59s <none> k8s-master-2 <none> <none>
Name: pod-1
Namespace: default
Priority: 0
Node: k8s-master-2/
Labels: security=s1
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
bear:
Image: nginx
Port: <none>
Host Port: <none>
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dzvml (ro)
Volumes:
kube-api-access-dzvml:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: dedicated=devs:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
Also, here is the output of kubectl describe node:
root@k8s-master-1:~/scheduling# kubectl describe nodes k8s-worker-2
Name: k8s-worker-2
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
dedicated=devs
kubernetes.io/arch=amd64
kubernetes.io/hostname=k8s-worker-2
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
projectcalico.org/IPv4Address: 10.128.0.4/32
projectcalico.org/IPv4IPIPTunnelAddr: 192.168.140.0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 18 Jul 2021 16:18:41 +0000
Taints: dedicated=devs:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: k8s-worker-2
AcquireTime: <unset>
RenewTime: Sun, 10 Oct 2021 18:54:46 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Sun, 10 Oct 2021 18:48:50 +0000 Sun, 10 Oct 2021 18:48:50 +0000 CalicoIsUp Calico is running on this node
MemoryPressure False Sun, 10 Oct 2021 18:53:40 +0000 Mon, 04 Oct 2021 07:52:58 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 10 Oct 2021 18:53:40 +0000 Mon, 04 Oct 2021 07:52:58 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sun, 10 Oct 2021 18:53:40 +0000 Mon, 04 Oct 2021 07:52:58 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sun, 10 Oct 2021 18:53:40 +0000 Mon, 04 Oct 2021 07:52:58 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 10.128.0.4
Hostname: k8s-worker-2
Capacity:
cpu: 2
ephemeral-storage: 20145724Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 8149492Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 18566299208
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 8047092Ki
pods: 110
System Info:
Machine ID: 3c2709a436fa0c630680bac68ad28669
System UUID: 3c2709a4-36fa-0c63-0680-bac68ad28669
Boot ID: 18a3541f-f3b4-4345-ba45-8cfef9fb1364
Kernel Version: 5.8.0-1038-gcp
OS Image: Ubuntu 20.04.2 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.7
Kubelet Version: v1.21.3
Kube-Proxy Version: v1.21.3
PodCIDR: 192.168.2.0/24
PodCIDRs: 192.168.2.0/24
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system calico-node-gp4tk 250m (12%) 0 (0%) 0 (0%) 0 (0%) 84d
kube-system kube-proxy-6xxgx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 81d
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 250m (12%) 0 (0%)
memory 0 (0%) 0 (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 6m25s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 6m25s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 6m19s (x7 over 6m25s) kubelet Node k8s-worker-2 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 6m19s (x7 over 6m25s) kubelet Node k8s-worker-2 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 6m19s (x7 over 6m25s) kubelet Node k8s-worker-2 status is now: NodeHasSufficientPID
Warning Rebooted 6m9s kubelet Node k8s-worker-2 has been rebooted, boot id: 18a3541f-f3b4-4345-ba45-8cfef9fb1364
Normal Starting 6m7s kube-proxy Starting kube-proxy.
I have included the following to show that the pod never issues events and that it is terminated later on by itself.
root@k8s-master-1:~/format/scheduling# kubectl get po
No resources found in default namespace.
root@k8s-master-1:~/format/scheduling# kubectl create -f nginx.yaml
pod/pod-1 created
root@k8s-master-1:~/format/scheduling# kubectl get po pod-1
NAME READY STATUS RESTARTS AGE
pod-1 0/1 Pending 0 10s
root@k8s-master-1:~/format/scheduling# kubectl describe po pod-1
Name: pod-1
Namespace: default
Priority: 0
Node: k8s-master-2/
Labels: security=s1
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
bear:
Image: nginx
Port: <none>
Host Port: <none>
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5hsq4 (ro)
Volumes:
kube-api-access-5hsq4:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: dedicated=devs:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
root@k8s-master-1:~/format/scheduling# kubectl get po pod-1
NAME READY STATUS RESTARTS AGE
pod-1 0/1 Pending 0 45s
root@k8s-master-1:~/format/scheduling# kubectl get po pod-1
NAME READY STATUS RESTARTS AGE
pod-1 0/1 Pending 0 62s
root@k8s-master-1:~/format/scheduling# kubectl get po pod-1
NAME READY STATUS RESTARTS AGE
pod-1 0/1 Pending 0 74s
root@k8s-master-1:~/format/scheduling# kubectl get po pod-1
Error from server (NotFound): pods "pod-1" not found
root@k8s-master-1:~/format/scheduling# kubectl get po
No resources found in default namespace.
root@k8s-master-1:~/format/scheduling#
I was able to figure this one out later. On reproducing the same case on another cluster, the pod was created on the node that had the scheduling parameters set. Then it occurred to me that the only change I had to make to the manifest was setting nodeName: node-1 to match the right node on the other cluster.
I was literally assigning the pod to a control plane node with nodeName: k8s-master-2, and this was causing the conflict.
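A minimal sketch of the corrected part of the manifest, assuming the intent was to pin the pod to k8s-worker-2 (alternatively, drop the nodeName line entirely and let the scheduler pick a node that satisfies the affinity and toleration):
spec:
  containers:
  - image: nginx
    name: bear
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "devs"
    effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: dedicated
            operator: In
            values:
            - devs
  nodeName: k8s-worker-2   # was k8s-master-2; can also be omitted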
on creating the pod, it gets scheduled on the k8s-worker-2 node but
remains in a pending state before it's finally evicted.
Also make sure your node has enough free resources left; a lack of resources could also be the reason behind the pod getting evicted.
https://sysdig.com/blog/kubernetes-pod-evicted/
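A couple of quick ways to check that (kubectl top requires metrics-server to be installed):
kubectl describe node k8s-worker-2 | grep -A 8 "Allocated resources"
kubectl top node k8s-worker-2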

A question about pods running on the Kubernetes (k8s) platform: the pods are running but the containers are not ready

I built a k8s cluster on my virtual machines (CentOS 7) with VirtualBox:
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master Ready control-plane,master 8d v1.21.2 192.168.0.186 <none> CentOS Linux 7 (Core) 3.10.0-1160.31.1.el7.x86_64 docker://20.10.7
k8s-worker01 Ready <none> 8d v1.21.2 192.168.0.187 <none> CentOS Linux 7 (Core) 3.10.0-1160.31.1.el7.x86_64 docker://20.10.7
k8s-worker02 Ready <none> 8d v1.21.2 192.168.0.188 <none> CentOS Linux 7 (Core) 3.10.0-1160.31.1.el7.x86_64 docker://20.10.7
I had run some pods in the default namespace with a ReplicaSet several days before.
They all worked fine at first, and then I shut down the VMs.
Today, after I restarted the VMs, I found that they are not working properly anymore:
kubectl get all
NAME READY STATUS RESTARTS AGE
pod/dnsutils 1/1 Running 3 5d13h
pod/kubapp-6qbfz 0/1 Running 0 5d13h
pod/kubapp-d887h 0/1 Running 0 5d13h
pod/kubapp-z6nw7 0/1 Running 0 5d13h
NAME DESIRED CURRENT READY AGE
replicaset.apps/kubapp 3 3 0 5d13h
Then I deleted the ReplicaSet and re-created it to recreate the pods.
I ran the following command to get more information:
[root@k8s-master ch04]# kubectl describe po kubapp-z887v
Name: kubapp-d887h
Namespace: default
Priority: 0
Node: k8s-worker02/192.168.0.188
Start Time: Fri, 23 Jul 2021 15:55:16 +0000
Labels: app=kubapp
Annotations: cni.projectcalico.org/podIP: 10.244.69.244/32
cni.projectcalico.org/podIPs: 10.244.69.244/32
Status: Running
IP: 10.244.69.244
IPs:
IP: 10.244.69.244
Controlled By: ReplicaSet/kubapp
Containers:
kubapp:
Container ID: docker://fc352ce4c6a826f2cf108f9bb9a335e3572509fd5ae2002c116e2b080df5ee10
Image: evalle/kubapp
Image ID: docker-pullable://evalle/kubapp@sha256:560c9c50b1d894cf79ac472a9925dc795b116b9481ec40d142b928a0e3995f4c
Port: <none>
Host Port: <none>
State: Running
Started: Fri, 23 Jul 2021 15:55:21 +0000
Ready: False
Restart Count: 0
Readiness: exec [ls /var/ready] delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m9rwr (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-m9rwr:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 30m default-scheduler Successfully assigned default/kubapp-d887h to k8s-worker02
Normal Pulling 30m kubelet Pulling image "evalle/kubapp"
Normal Pulled 30m kubelet Successfully pulled image "evalle/kubapp" in 4.049160061s
Normal Created 30m kubelet Created container kubapp
Normal Started 30m kubelet Started container kubapp
Warning Unhealthy 11s (x182 over 30m) kubelet Readiness probe failed: ls: cannot access /var/ready: No such file or directory
I don't know why this happens or how I should fix it.
So here I am, asking you guys for help.
I am a k8s newbie; please give me a hand.
Thanks for paul-becotte's help and recommendation. I think I should post the definition of the pods (the ReplicaSet):
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  # here is the name of the replication controller (RC)
  name: kubapp
spec:
  replicas: 3
  # what pods the RC is operating on
  selector:
    matchLabels:
      app: kubapp
  # the pod template for creating new pods
  template:
    metadata:
      labels:
        app: kubapp
    spec:
      containers:
      - name: kubapp
        image: evalle/kubapp
        readinessProbe:
          exec:
            command:
            - ls
            - /var/ready
There is an example YAML definition at https://github.com/Evalle/k8s-in-action/blob/master/Chapter_4/kubapp-rs.yaml.
I don't know where to find the Dockerfile of the image evalle/kubapp,
and I don't know whether it has the /var/ready directory.
Look at your event:
Warning Unhealthy 11s (x182 over 30m) kubelet Readiness probe failed: ls: cannot access /var/ready: No such file or directory
Your readiness probe is failing: it looks like it is checking for the existence of a file at /var/ready.
Your next step is to ask: does that make sense? Is my container actually going to write a file at /var/ready when it's ready? If so, look at the logs from your pod and figure out why it's not writing the file. If it's NOT the correct check, look at the YAML you used to create your pod/deployment/replicaset and replace that check with something that does make sense.
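If /var/ready really is meant as a manual readiness switch (which is how the book example this manifest appears to come from uses it), you can create the file by hand and watch the pod flip to Ready, e.g.:
kubectl exec kubapp-d887h -- touch /var/ready
kubectl get po kubapp-d887h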

How do you install mayastor for openebs with microk8s to use as PV/SC?

I have a 3-node microk8s cluster running on VirtualBox Ubuntu VMs, and I am trying to get Mayastor for OpenEBS working to use with PVCs. I have followed the steps in this guide:
https://mayastor.gitbook.io/introduction/quickstart/preparing-the-cluster
https://mayastor.gitbook.io/introduction/quickstart/deploy-mayastor
https://mayastor.gitbook.io/introduction/quickstart/configure-mayastor
An example of my MayastorPool from step 3 looks like this:
apiVersion: "openebs.io/v1alpha1"
kind: MayastorPool
metadata:
name: pool-on-node1-n2
namespace: mayastor
spec:
node: node1
disks: [ "/dev/nvme0n2" ]
And my StorageClass looks like this:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-3
provisioner: io.openebs.csi-mayastor
parameters:
  repl: '3'
  protocol: 'nvmf'
  ioTimeout: '60'
  local: 'true'
volumeBindingMode: WaitForFirstConsumer
All the checks seem fine according to the guide, but when I try creating a PVC and using it according to https://mayastor.gitbook.io/introduction/quickstart/deploy-a-test-application, the test application fio pod doesn't come up. When I look at it with describe, I see the following:
$ kubectl describe pods fio -n mayastor
Name: fio
Namespace: mayastor
Priority: 0
Node: node2/192.168.40.12
Start Time: Wed, 02 Jun 2021 22:56:03 +0000
Labels: <none>
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
fio:
Container ID:
Image: nixery.dev/shell/fio
Image ID:
Port: <none>
Host Port: <none>
Args:
sleep
1000000
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l6cdf (ro)
/volume from ms-volume (rw)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
ms-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: ms-volume-claim
ReadOnly: false
kube-api-access-l6cdf:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: openebs.io/engine=mayastor
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 44m default-scheduler Successfully assigned mayastor/fio to node2
Normal SuccessfulAttachVolume 44m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-ec6ce101-fb3e-4a5a-8d61-1d228f8f8199"
Warning FailedMount 24m (x4 over 40m) kubelet Unable to attach or mount volumes: unmounted volumes=[ms-volume], unattached volumes=[kube-api-access-l6cdf ms-volume]: timed out waiting for the condition
Warning FailedMount 13m (x23 over 44m) kubelet MountVolume.SetUp failed for volume "pvc-ec6ce101-fb3e-4a5a-8d61-1d228f8f8199" : rpc error: code = Internal desc = Failed to find parent dir for mountpoint /var/snap/microk8s/common/var/lib/kubelet/pods/b1166af6-1ade-4a3a-9b1d-653151418695/volumes/kubernetes.io~csi/pvc-ec6ce101-fb3e-4a5a-8d61-1d228f8f8199/mount, volume ec6ce101-fb3e-4a5a-8d61-1d228f8f8199
Warning FailedMount 4m3s (x13 over 42m) kubelet Unable to attach or mount volumes: unmounted volumes=[ms-volume], unattached volumes=[ms-volume kube-api-access-l6cdf]: timed out waiting for the condition
Any ideas on where to look or what to do to get Mayastor working with microk8s? I'm happy to post more information.
Thanks to Kiran Mova's comments and Niladri from the openebs slack channel:
Replace the step:
https://mayastor.gitbook.io/introduction/quickstart/deploy-mayastor#csi-node-plugin
kubectl apply -f https://raw.githubusercontent.com/openebs/Mayastor/master/deploy/csi-daemonset.yaml
with
curl -fSs https://raw.githubusercontent.com/openebs/Mayastor/master/deploy/csi-daemonset.yaml | sed "s|/var/lib/kubelet|/var/snap/microk8s/common/var/lib/kubelet|g" - | kubectl apply -f -
So replace the path with the microk8s-specific installation path. Even though there is a symlink, things don't seem to work out right without this change.
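One rough sanity check (not from the guide) to confirm the substitution was applied is to grep the rewritten path out of the DaemonSets in the mayastor namespace:
kubectl -n mayastor get daemonsets -o yaml | grep /var/snap/microk8s/common/var/lib/kubelet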