After uninstalling Calico, new pods are stuck in ContainerCreating state - kubernetes

After uninstalling Calico with kubectl delete -f calico.yaml, I am not able to create new pods in the cluster. Any new pods are stuck in the ContainerCreating state. kubectl describe shows the error below:
Warning FailedCreatePodSandBox 2m kubelet, 10.0.12.2 Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "f15743177fd70c5eabf70c60be5b8b354e5346837d1b5d59bf99d1d1b5d6416c" network for pod "test-9465-768b57b5df-fv9d4": NetworkPlugin cni failed to set up pod "test-9465-768b57b5df-fv9d4_policy-demo" network: error getting ClusterInformation: connection is unauthorized: Unauthorized, failed to clean up sandbox container "f15743177fd70c5eabf70c60be5b8b354e5346837d1b5d59bf99d1d1b5d6416c" network for pod "test-9465-768b57b5df-fv9d4": NetworkPlugin cni failed to teardown pod "test-9465-768b57b5df-fv9d4_policy-demo" network: error getting ClusterInformation: connection is unauthorized: Unauthorized]

The main issue is that Calico has an init container but no corresponding cleanup container.
To undeploy Calico, we have to do the usual kubectl delete -f <yaml>, and then delete the Calico conf file in /etc/cni/net.d/ on each of the nodes. This configuration file, along with other binaries, is loaded onto the host by the init container.
https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/
From this link we can see that kubelet reads the configuration files from the default directory, and if there are multiple configuration files, it applies the CNI plugin from the config file that comes first in alphabetical order (why, oh god why??).
So, in our case, after uninstalling Calico its cluster permissions are gone, but kubelet on each node still tries to apply Calico based on the config file it picks up from the default directory, which is why sandbox creation fails with the Unauthorized error. The node then has to be restarted to get rid of the stale iptables rules.
Removing the file and restarting the node solves the issue, and we get back to normal behavior. Another way to solve the same problem is to simply terminate the node if you are on a managed Kubernetes cluster: the cloud infrastructure automatically boots up another node to keep the desired state, and that node no longer has the Calico configuration file.
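Putting it together, the cleanup on a self-managed cluster looks roughly like this (a sketch; the conf file is commonly named 10-calico.conflist, but the exact name can vary between Calico versions):
# 1. Remove the Calico resources from the cluster
kubectl delete -f calico.yaml
# 2. On every node, remove the CNI config that the init container dropped
ls /etc/cni/net.d/
sudo rm /etc/cni/net.d/10-calico.conflist
# 3. Reboot the node (or flush iptables) so the stale Calico rules disappear
sudo reboot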

Related

OKD unable to pull larger images from internal registry right after deployment of microservices through Jenkins X

I am trying to deploy microservices to OKD through Jenkins X, and the deployment is successful every time.
But the pods go into an "ImagePullBackOff" error right after deployment and only come into the Running state after I delete them.
The images are pulled from OKD's internal registry, each image is about 1.25 GB in size, and the images are available in the internal registry at the time the pod tries to pull them.
I came across the "image-pull-progress-deadline" field, which needs to be updated in "/etc/origin/node/node-config.yaml" on all the nodes. I updated it on all the nodes but am still facing the same "ImagePullBackOff" error.
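For reference, on OKD 3.x this setting lives under kubeletArguments in node-config.yaml, roughly like this (a sketch; the exact structure and the deadline value may differ by version):
kubeletArguments:
  image-pull-progress-deadline:
  - "30m"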
I tried restarting the kubelet service, but that fails with a kubelet.service not found error:
[master ~]$ sudo systemctl status kubelet
Unit kubelet.service could not be found.
Please let me know if a restart of the kubelet service is necessary, and I'd appreciate any suggestions to resolve the "ImagePullBackOff" issue.
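On OKD 3.x the kubelet does not run as a standalone kubelet.service; it runs inside the node service, so checking or restarting it would look roughly like this (a sketch, assuming OKD 3.x where the unit is called origin-node):
[master ~]$ sudo systemctl status origin-node     # the kubelet runs inside this unit on OKD 3.x
[master ~]$ sudo systemctl restart origin-node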

fluentd daemon set container for papertrail failing to start in kubernetes cluster

I am trying to set up fluentd in a Kubernetes cluster to aggregate logs in Papertrail, as per the documentation provided here.
The configuration file is fluentd-daemonset-papertrail.yaml
It basically creates a daemon set for the fluentd container and a config map for the fluentd configuration.
When I apply the configuration, the pod is assigned to a node and the container is created. However, it either never completes initialization or the pod gets killed immediately after it starts.
As the pods are getting killed, I'm losing the logs too, so I couldn't investigate the cause of the issue.
Looking through the events for the kube-system namespace shows the errors below:
Error: failed to start container "fluentd": Error response from daemon: OCI runtime create failed: container_linux.go:338: creating new parent process caused "container_linux.go:1897: running lstat on namespace path \"/proc/75026/ns/ipc\" caused \"lstat /proc/75026/ns/ipc: no such file or directory\"": unknown
Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9559643bf77e29d270c23bddbb17a9480ff126b0b6be10ba480b558a0733161c" network for pod "fluentd-papertrail-b9t5b": NetworkPlugin kubenet failed to set up pod "fluentd-papertrail-b9t5b_kube-system" network: Error adding container to network: failed to open netns "/proc/111610/ns/net": failed to Statfs "/proc/111610/ns/net": no such file or directory
I'm not sure what's causing these errors. I'd appreciate any help in understanding and troubleshooting them.
Also, is it possible to look at logs/events that could tell us why a pod was given a terminate signal?
Please ensure that /etc/cni/net.d and its /opt/cni/bin friend both exist and are correctly populated with the CNI configuration files and binaries on all Nodes.
Take a look: sandbox.
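A quick way to check this on each node (a sketch):
ls /etc/cni/net.d/    # should contain a .conf/.conflist for your network plugin
ls /opt/cni/bin/      # should contain the CNI binaries (bridge, host-local, loopback, ...)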
With help from the Papertrail support team, I was able to resolve the issue by removing the entry below from the manifest file:
kubernetes.io/cluster-service: "true"
The annotation above seems to have been deprecated.
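For context, the entry sits in the labels of the DaemonSet in fluentd-daemonset-papertrail.yaml, roughly like this (a sketch; the surrounding labels are assumptions, only the removed line matters):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-papertrail
  namespace: kube-system
  labels:
    k8s-app: fluentd-papertrail
    # kubernetes.io/cluster-service: "true"   <-- remove this line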
Relevant github issues:
https://github.com/fluent/fluentd-kubernetes-daemonset/issues/296
https://github.com/kubernetes/kubernetes/issues/72757

coredns pods failing to be created by kubeadm init command

When I run the kubeadm init command, all pods are running except the coredns pods. When I describe those pods, they show that CNI initialization failed.
Do I need a network plugin to be installed before running kubeadm init?
No, the network add-on is only added after kubeadm init; the documentation is explicit on this topic: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
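In other words, the usual order is the following (a sketch; the pod-network CIDR and the add-on manifest depend on which network plugin you choose):
sudo kubeadm init --pod-network-cidr=192.168.0.0/16   # CIDR depends on the chosen add-on
kubectl apply -f <network-addon-manifest.yaml>        # e.g. the Calico or Flannel manifest
kubectl -n kube-system get pods -w                    # coredns should go Running once the CNI is up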

All Kubernetes Pods go down simultaneously periodically

I've been running a Kubernetes cluster for a while now, but I haven't been able to keep it stable.
My cluster consists of four nodes, two masters and two workers. All nodes run on the same physical server, which in turn runs VMware vSphere 6.5. Each node runs CoreOS stable (1353.7.0), and I'm running Kubernetes/Hyperkube v1.6.4, using Calico for networking. I've followed the steps in this guide.
What happens is that for a few hours/days, the cluster will run without a hitch. Then, all of a sudden (for no discernible reason as far as I can tell) all my pods go to status "Pending" and stay that way. Any hosted services are then no longer reachable.
After a while (usually 5 to 10 minutes), it seems to restore itself, after which it starts recreating all my pods, and trying (but failing) to shut down all my running pods. Some of the newly created pods come up, but will initially have no connection to the internet.
For a couple of weeks now I've had this issue intermittently, and it's been preventing me from using Kubernetes in production. I'd really like to figure out what's been causing this!
Weirdly enough, when I try to diagnose the problem by inspecting the logs, I've noticed that on both of my worker nodes the journald logs have become corrupted! On the master nodes, the log is still readable, but not very informative.
Even when the cluster is running, kubelet is constantly emitting errors in its logs. On all the nodes, this is what's posted about once a minute:
May 26 09:37:14 kube-master1 kubelet-wrapper[24228]: E0526 09:37:14.012890 24228 cni.go:275] Error deleting network: open /var/lib/cni/flannel/3975179a14dac15cd41881266c9bfd6b8763c0a48934147582cb55d5618a9233: no such file or directory
May 26 09:37:14 kube-master1 kubelet-wrapper[24228]: E0526 09:37:14.014762 24228 remote_runtime.go:109] StopPodSandbox "3975179a14dac15cd41881266c9bfd6b8763c0a48934147582cb55d5618a9233" from runtime service failed: rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod "logstash-s3498_default" network: open /var/lib/cni/flannel/3975179a14dac15cd41881266c9bfd6b8763c0a48934147582cb55d5618a9233: no such file or directory
May 26 09:37:14 kube-master1 kubelet-wrapper[24228]: E0526 09:37:14.014818 24228 kuberuntime_gc.go:138] Failed to stop sandbox "3975179a14dac15cd41881266c9bfd6b8763c0a48934147582cb55d5618a9233" before removing: rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod "logstash-s3498_default" network: open /var/lib/cni/flannel/3975179a14dac15cd41881266c9bfd6b8763c0a48934147582cb55d5618a9233: no such file or directory
May 26 09:38:07 kube-master1 kubelet-wrapper[24228]: I0526 09:38:07.422341 24228 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/9a378211-3597-11e7-a7ec-000c2958a0d7-default-token-0p3gf" (spec.Name: "default-token-0p3gf") pod "9a378211-3597-11e7-a7ec-000c2958a0d7" (UID: "9a378211-3597-11e7-a7ec-000c2958a0d7").
May 26 09:38:14 kube-master1 kubelet-wrapper[24228]: W0526 09:38:14.037553 24228 docker_sandbox.go:263] NetworkPlugin cni failed on the status hook for pod "logstash-s3498_default": Unexpected command output nsenter: cannot open : No such file or directory
May 26 09:38:14 kube-master1 kubelet-wrapper[24228]: with error: exit status 1
I've googled this error and came across this issue, but it has been closed, and people indicate that using v1.6.0 or later should resolve it, which definitely hasn't been the case for me...
Can anybody point me in the right direction?!
Thanks!
Seen this as well. The problem seems to go away if you downgrade CoreOS to an older version with Docker 1.12.3.
Docker is a nightmare, with regressions in every single version they release :(
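To check which Docker version a node is actually running before and after such a downgrade (a sketch):
docker version --format '{{.Server.Version}}'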

Kubernetes cluster down error

When running the following command to bring the cluster down in Kubernetes, I get this error:
KUBERNETES_PROVIDER=ubuntu ./kube-down.sh
rm: cannot remove ‘/var/lib/kubelet/pods/16981b98-a3bb-11e5-99fb-00505622b20d/volumes/kubernetes.io~secret/default-token-0i2n6’: Device or resource busy
I tried to remove it forcefully, but it still isn't getting removed.
This isn't taking into account pods that are Terminating, nor pods in namespaces other than the default namespace. Filed an issue:
https://github.com/kubernetes/kubernetes/issues/20469
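As a workaround until that is fixed: the "Device or resource busy" error usually means the secret volume is still mounted as a tmpfs, so one approach is to unmount it before removing the directory (a sketch, reusing the pod UID from the error above):
mount | grep /var/lib/kubelet/pods     # list volume mounts kubelet still holds
sudo umount /var/lib/kubelet/pods/16981b98-a3bb-11e5-99fb-00505622b20d/volumes/kubernetes.io~secret/default-token-0i2n6
sudo rm -rf /var/lib/kubelet/pods/16981b98-a3bb-11e5-99fb-00505622b20d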