Minikube got stuck when creating container - kubernetes

I recently started learning Kubernetes by running Minikube locally on my Mac. Previously, I was able to start a local Kubernetes cluster with Minikube 0.10.0, create a deployment, and view the Kubernetes dashboard.
Yesterday I tried to delete the cluster and redo everything from scratch. However, I found I could not get the assets deployed or view the dashboard. From what I saw, everything seemed to get stuck during container creation.
After I ran minikube start, it reported:
Starting local Kubernetes cluster...
Kubectl is now configured to use the cluster.
When I ran kubectl get pods --all-namespaces, it reported (pay attention to the STATUS column):
kubectl get pods --all-namespaces
NAMESPACE     NAME                          READY     STATUS              RESTARTS   AGE
kube-system   kube-addon-manager-minikube   0/1       ContainerCreating   0          51s
docker ps showed nothing:
docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
minikube status tells me the VM and cluster are running:
minikubeVM: Running
localkube: Running
When I created a deployment and an autoscaler, I was told they were created successfully:
kubectl create -f configs
deployment "hello-minikube" created
horizontalpodautoscaler "hello-minikube-autoscaler" created
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY     STATUS              RESTARTS   AGE
default       hello-minikube-661011369-1pgey   0/1       ContainerCreating   0          1m
default       hello-minikube-661011369-91iyw   0/1       ContainerCreating   0          1m
kube-system   kube-addon-manager-minikube      0/1       ContainerCreating   0          21m
When exposing the service, it said:
$ kubectl expose deployment hello-minikube --type=NodePort
service "hello-minikube" exposed
$ kubectl get service
NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
hello-minikube   10.0.0.32    <nodes>       8080/TCP   6s
kubernetes       10.0.0.1     <none>        443/TCP    22m
When I tried to access the service, I was told:
curl $(minikube service hello-minikube --url)
Waiting, endpoint for service is not ready yet...
docker ps still showed nothing. It looked to me as if everything got stuck when creating a container. I tried several other ways to work around this issue:
Upgraded to Minikube 0.11.0
Used the xhyve driver instead of the VirtualBox driver
Deleted everything cached, like ~/.minikube, ~/.kube, and the cluster, and retried
None of them worked for me.
Kubernetes is still new to me, so I would like to know:
How can I troubleshoot this kind of issue?
What could be the cause of this issue?
Any help is appreciated. Thanks.

It turned out to be a network problem in my case.
While the pod status is "ContainerCreating", the Docker image has to be pulled from gcr.io, which is inaccessible in China (blocked by the GFW). It worked the previous time because I happened to be connected to it via a VPN.
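One quick way to confirm this kind of problem is to pull an image from gcr.io by hand from inside the Minikube VM; a sketch, using the pause image as just one well-known example hosted there:
minikube ssh
# inside the VM: if this hangs or fails, gcr.io is unreachable from the node
docker pull gcr.io/google_containers/pause-amd64:3.0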

I haven't tried Minikube, but I do use Kubernetes. With the information provided it is difficult to pinpoint the cause of the issue. Minikube has no problem creating the resources; a pod stuck in ContainerCreating points to the Docker daemon, improper communication between the kube-apiserver and the Docker daemon, or a problem with the kubelet.
You can try the following command:
kubectl describe po POD_NAME
This will give you the pod's events, which may point you to the root cause of the issue.
You can also check the kubelet logs for those events.
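For example, on a systemd-based node (an assumption; with Minikube itself, minikube logs serves the same purpose):
journalctl -u kubelet --no-pager | tail -n 50   # last events logged by the kubelet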

I had this problem on Windows, and it was related to an NTLM proxy. I deleted the Minikube VM, then recreated it with the correct proxy settings for my CNTLM installation:
minikube start \
--docker-env http_proxy=http://10.0.2.2:3128 \
--docker-env https_proxy=http://10.0.2.2:3128 \
--docker-env no_proxy=localhost,127.0.0.1,::1,192.168.99.100
See https://blog.alexellis.io/minikube-behind-proxy/
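A sketch of how to verify the daemon actually received those variables, assuming the Minikube VM runs Docker as a systemd service (true for recent boot2docker images):
minikube ssh
# inside the VM: the proxy variables should appear in the service environment
systemctl show docker --property=Environment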

The horizontal pod autoscaler (HPA) requires Heapster in order to work, so you'll need to run Heapster in Minikube for the autoscaler to do anything. You can always debug these kinds of issues with minikube logs or interactively through the dashboard at minikube dashboard.
You can find the steps to run Heapster and Grafana at https://github.com/kubernetes/heapster
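Alternatively, many Minikube releases ship Heapster as a built-in addon (whether yours does is an assumption; check minikube addons list):
minikube addons enable heapster
# verify it comes up:
kubectl get pods -n kube-system | grep heapster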

In my case, it took several minutes before the ContainerCreating problem appeared. After executing the following command:
systemctl status kube-controller-manager.service
I get this error:
Sync "default/redis-master-2229813293" failed with unable to create pods: No API token found for service account "default", retry after the token is automatically created and added to the service account.
There are two ways to solve this:
Set up the service account with a token
Remove ServiceAccount from the KUBE_ADMISSION_CONTROL setting of the API server, as sketched below
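For the second option, on installations that configure the API server through /etc/kubernetes/apiserver (an assumption; your distribution may keep this elsewhere), the edit looks roughly like this:
# before: ServiceAccount is in the admission control list
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
# after: ServiceAccount removed
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
# then restart the API server
systemctl restart kube-apiserver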

Related

Kubernetes pod can't communicate with other pods in the same node

We are using Kubernetes 1.21.7, Istio 1.11.4, and Flannel 0.14.0.
kubectl get nodes
NAME     STATUS   ROLES                  AGE    VERSION
k8s-d0   Ready    control-plane,master   204d   v1.21.7
k8s-d1   Ready    <none>                 204d   v1.21.7
k8s-d2   Ready    <none>                 204d   v1.21.7
If pod-a and pod-b are on the same node, for example k8s-d1, they can't communicate (using curl, for example). But if I force the pods onto different nodes, they communicate just fine.
This issue only occurs in the "istio-system" namespace, but it does not seem to be an Istio bug (I already tried opening an issue here, without success).
I figured out what was missing:
modprobe br_netfilter
echo "br_netfilter" >> /etc/modules-load.d/modules.conf
At some point I had restarted those nodes, and br_netfilter didn't load automatically. Now that it is listed in /etc/modules-load.d/modules.conf, it is loaded on boot.
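To verify the fix, check that the module is loaded and that bridged traffic is actually passed through iptables (where the Flannel and kube-proxy rules live):
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables   # should print 1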
Thank you for your support.

fail to run istio-ingressgateway, got Readiness probe failed: connection refused

I failed to deploy Istio and ran into this problem when I tried to deploy it using istioctl install --set profile=default -y. The output was:
➜ istio-1.11.4 istioctl install --set profile=default -y
✔ Istio core installed
✔ Istiod installed
✘ Ingress gateways encountered an error: failed to wait for resource: resources not ready after 5m0s: timed out waiting for the condition
Deployment/istio-system/istio-ingressgateway (containers with unready status: [istio-proxy])
- Pruning removed resources Error: failed to install manifests: errors occurred during operation
After running kubectl get pods -n=istio-system, I found the istio-ingressgateway pod had been created; kubectl describe showed:
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  4m36s                   default-scheduler  Successfully assigned istio-system/istio-ingressgateway-8dbb57f65-vc85p to k8s-slave
Normal   Pulled     4m35s                   kubelet            Container image "docker.io/istio/proxyv2:1.11.4" already present on machine
Normal   Created    4m35s                   kubelet            Created container istio-proxy
Normal   Started    4m35s                   kubelet            Started container istio-proxy
Warning  Unhealthy  3m56s (x22 over 4m34s)  kubelet            Readiness probe failed: Get "http://10.244.1.4:15021/healthz/ready": dial tcp 10.244.1.4:15021: connect: connection refused
And I can't get the log of this pod:
➜ ~ kubectl logs pods/istio-ingressgateway-8dbb57f65-vc85p -n=istio-system
Error from server: Get "https://192.168.0.154:10250/containerLogs/istio-system/istio-ingressgateway-8dbb57f65-vc85p/istio-proxy": dial tcp 192.168.0.154:10250: i/o timeout
I ran all these commands on two VMs in Huawei Cloud, with a 2C8G master and a 2C4G slave, both on Ubuntu 18.04. I have reinstalled the environment and the Kubernetes cluster, but that didn't help.
Without ingressgateway
I also tried istioctl install --set profile=minimal -y, which runs only istiod. But when I tried to run httpbin (kubectl apply -f samples/httpbin/httpbin.yaml) with auto-injection on, the deployment couldn't create a pod.
➜ istio-1.11.4 kubectl get deployment
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
httpbin   0/1     0            0           5m24s
➜ istio-1.11.4 kubectl describe deployment/httpbin
...
Events:
Type    Reason             Age    From                   Message
----    ------             ----   ----                   -------
Normal  ScalingReplicaSet  6m6s   deployment-controller  Scaled up replica set httpbin-74fb669cc6 to 1
When I unlabel the default namespace (kubectl label namespace default istio-injection-), everything works fine.
I want to deploy the Istio ingress gateway and run demos through it, but I have no idea how to resolve this situation. Thanks for any help.
I made a silly mistake.
After communicating with my cloud provider, I was informed that there was a network security policy on my cloud servers. Strangely, one server had full access while the other had only partial access (only ports like 80, 443, and so on were allowed). After I changed the policy, everything worked fine.
For anyone who runs into a similar problem: after hours of searching on Google, I found that these failures all seem to come down to network problems, such as DNS configuration, Kubernetes configuration, or server network issues. As howardjohn said in this issue, it is not an Istio problem.
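If you suspect the same kind of filtering, a quick check is to probe the exact ports from the error messages above by hand (IPs taken from this question; substitute your own):
# from the master: is the worker's kubelet port reachable?
nc -vz 192.168.0.154 10250
# on the worker: does the gateway pod's readiness endpoint answer?
curl -s http://10.244.1.4:15021/healthz/ready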

I cannot load the node information on kubernetes

When I ran the command below, I got the following message:
bistel#BISTelResearchDev-DN03:~$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
On the master node, however, I get the information as expected:
bistel#BISTelResearchDev-NN:/etc/kubernetes$ kubectl get nodes
NAME                     STATUS     ROLES    AGE   VERSION
bistelresearchdev-dn03   NotReady   <none>   62s   v1.19.3
bistelresearchdev-nn     Ready      master   57m   v1.19.3
bistel#BISTelResearchDev-NN:/etc/kubernetes$
bistelresearchdev-dn03 is the worker node, and the message The connection to the server localhost:8080 was refused - did you specify the right host or port? appears whenever I run any kubectl command there.
I googled a lot, but nothing I tried worked for me.
Thanks,
kubectl is only configured on the master node of the cluster by default, so getting this error on the worker is not an issue in itself.
The actual issue I can see here is that the node is in NotReady status. For that, you can check the following:
Check that kubelet is running on node bistelresearchdev-dn03 with systemctl status kubelet
Check that a network plugin is installed on your cluster.
The first machine you ran on is missing the kubeconfig file.
Normally kubectl expects to find it at
~/.kube/config
If you copy the one from the master node onto your machine, kubectl will pick it up and be able to use it.
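For example (hostname taken from the question above; adjust user, host, and paths to your setup):
# on the worker node:
mkdir -p ~/.kube
scp bistel@BISTelResearchDev-NN:~/.kube/config ~/.kube/config
kubectl get nodes   # should now reach the API server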

Unable to get kubernetes dashboard

I've installed a new cluster (version 1.13.5 of kubectl, kubelet, and kubeadm), then installed Flannel and added a worker node.
Now I'm trying to add the Kubernetes dashboard to my cluster, but after I run
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
I end up with this situation:
kubernetes-dashboard-**** 0/1 CrashLoopBackOff 1 8s
The pod log then shows this:
Error while initializing connection to Kubernetes apiserver...
Where am I wrong?
It seems the problem was on the worker: when I force the dashboard onto the master, the pod starts.
Maybe the dashboard has to be installed on the master, or there is something wrong with Flannel and the master-to-node communication.
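If pinning the dashboard to the master is an acceptable workaround, one way is a nodeSelector patch; a sketch assuming the standard kubeadm master label, and note the master must be schedulable (or the pod needs a matching toleration):
kubectl -n kube-system patch deployment kubernetes-dashboard \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"node-role.kubernetes.io/master":""}}}}}'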
Check whether the api-server pod is running and whether KubeDNS is working correctly.
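On a kubeadm cluster both can be checked with label selectors (standard labels assumed):
kubectl -n kube-system get pods -l component=kube-apiserver
kubectl -n kube-system get pods -l k8s-app=kube-dns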

Kubernetes Play Dashboard Access from Outside

I wanted to learn Kubernetes using the Play with Kubernetes site, but I seem to have run into an issue.
Here is what I did.
I created my Kubernetes cluster by following the steps at
https://labs.play-with-k8s.com/p/bc3a57pk4ckg00bvdk70#bc3a57pk_bc3amn9k4ckg00bvdkv0
and ended up with the following setup of 1 master and 2 nodes:
[node1 ~]$ kubectl cluster-info
Kubernetes master is running at https://192.168.0.18:6443
Heapster is running at https://192.168.0.18:6443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://192.168.0.18:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy
monitoring-influxdb is running at https://192.168.0.18:6443/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy
I then deployed my dashboard using the following steps:
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
[node1 ~]$ kubectl -n kube-system get service kubernetes-dashboard
NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes-dashboard   ClusterIP   10.98.185.58   <none>        443/TCP   58m
According to this issue, https://github.com/play-with-docker/play-with-docker/issues/258, the dashboard port is no longer accessible in the UI.
Now, how can I access my dashboard from the outside?
According to the FAQ here:
https://github.com/play-with-docker/play-with-docker
https://github.com/play-with-docker/play-with-docker
How can I connect to a published port from the outside world?
If you need to access your services from outside, use the following URL pattern: http://ip<hyphen-ip>-<session_id>-<port>.direct.labs.play-with-docker.com (e.g. http://ip-2-135-3-b8ir6vbg5vr00095iil0-8080.direct.labs.play-with-docker.com).
Given my IP address and session from
https://labs.play-with-k8s.com/p/bc3a57pk4ckg00bvdk70#bc3a57pk_bc3amn9k4ckg00bvdkv0
I tried the following, but was not successful in accessing the dashboard:
http://ip-192-168-0-18-bc3a57pk4ckg00bvdk70-8443.direct.labs.play-with-docker.com/
What did I do wrong, or what am I missing?
I tried everything in Running dashboard inside play-with-kubernetes, but nothing was successful.
Any hints?
Have you seen this? https://github.com/play-with-docker/play-with-docker/issues/259#issuecomment-387607163
You need to make some changes to the deployment in order to access it from outside.
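A common approach (whether it matches that comment exactly is an assumption; a sketch assuming the kube-system namespace used by this dashboard version) is to expose the dashboard service as a NodePort and plug the assigned port into the PWD URL pattern:
kubectl -n kube-system patch service kubernetes-dashboard \
  -p '{"spec":{"type":"NodePort"}}'
# read the assigned node port (3xxxx) and use it as <port> in the URL pattern
kubectl -n kube-system get service kubernetes-dashboard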