We are using Kubernetes 1.21.7, Istio 1.11.4, and Flannel 0.14.0.
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-d0 Ready control-plane,master 204d v1.21.7
k8s-d1 Ready <none> 204d v1.21.7
k8s-d2 Ready <none> 204d v1.21.7
If pod-a and pod-b are on the same node, for example k8s-d1, they can't communicate (using curl, for example). But if I force the pods onto different nodes, they communicate just fine.
This issue only occurs in the "istio-system" namespace, but it seems it is not an Istio bug (I already tried opening an issue here, but without success).
I figured out what was missing:
modprobe br_netfilter
echo "br_netfilter" >> /etc/modules-load.d/modules.conf
At some point, I restarted those nodes and br_netfilter didn't load automatically. Now that it is written in /etc/modules-load.d/modules.conf, it is applied on boot.
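For anyone hitting the same issue, a quick way to confirm the module is loaded and that the related sysctl (the standard bridge-nf setting, nothing specific to my setup) is in effect:
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables   # should return 1 so bridged pod traffic goes through iptables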
Thank you for your support.
Hello everyone.
Please tell me why the kubectl get nodes command does not return master node information in a fully managed Kubernetes cluster.
I have a Kubernetes cluster in GKE. When I type the kubectl get nodes command, I get the information below.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-istio-test-01-pool-01-030fc539-c6xd Ready <none> 3m13s v1.13.11-gke.14
gke-istio-test-01-pool-01-030fc539-d74k Ready <none> 3m18s v1.13.11-gke.14
gke-istio-test-01-pool-01-030fc539-j685 Ready <none> 3m18s v1.13.11-gke.14
$
Of course, I can get the worker node information. This information matches the GKE web console.
By the way, I have another Kubernetes cluster built from three Raspberry Pis with kubeadm. When I type the kubectl get nodes command on this cluster, I get the result below.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 262d v1.14.1
node01 Ready <none> 140d v1.14.1
node02 Ready <none> 140d v1.14.1
$
This result includes master node information.
I'm curious why I cannot get the master node information in a fully managed Kubernetes cluster.
I understand that the advantage of a fully managed service is that we don't have to manage the control plane ourselves. I want to know how to create a Kubernetes cluster in which the master node information is not displayed.
I tried to create a cluster "the hard way", but couldn't find any information that could serve as a hint.
Lastly, I'm still learning English, so please correct me if I'm wrong.
It's a good question!
The key is the kubelet component of Kubernetes.
Managed Kubernetes offerings run the control plane components on their own masters, but they don't run a kubelet that registers with your cluster's API server, so those machines never show up as nodes. You can easily achieve the same on your DIY cluster.
The kubelet is the primary “node agent” that runs on each node. It can register the node with the apiserver using one of: the hostname; a flag to override the hostname; or specific logic for a cloud provider.
https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/
When the kubelet flag --register-node is true (the default), the kubelet will attempt to register itself with the API server. This is the preferred pattern, used by most distros.
https://kubernetes.io/docs/concepts/architecture/nodes/#self-registration-of-nodes
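For example, the ROLES column you see in kubectl get nodes is derived from node-role.kubernetes.io/* labels on the registered Node objects; on a DIY cluster you can inspect or set these yourself (the node name below is just a placeholder):
kubectl get nodes --show-labels | grep node-role
kubectl label node <node-name> node-role.kubernetes.io/master=""
Since GKE's control plane machines never register as Node objects in your cluster, there is simply nothing for such a label to appear on.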
Because there are no nodes with that role. The control plane for GKE is hosted within their own magic system, not on your own nodes.
I've installed a new cluster (version 1.13.5 of kubectl, kubelet, and kubeadm), then I installed Flannel and added a worker node.
Now I'm trying to add the Kubernetes dashboard to my cluster, but after I run
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
I get this situation:
kubernetes-dashboard-**** 0/1 CrashLoopBackOff 1 8s
Then if I check the logs I can see this:
Error while initializing connection to Kubernetes apiserver...
Where am I going wrong?
It seems that the problem was on the worker; when I schedule the dashboard on the master, the pod starts.
Maybe the kube dashboard has to be installed on the master, or there is something wrong with Flannel and the master-node communication.
Check whether the api-server pod is running and whether KubeDNS is working correctly.
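For example, with a standard kubeadm-style setup (the k8s-app label is the usual default, adjust if yours differs):
kubectl get pods -n kube-system
kubectl logs -n kube-system -l k8s-app=kube-dns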
I have a Kubernetes master and node set up in two CentOS VMs on my Windows 10 machine.
I used Flannel for the CNI and deployed Ambassador as an API gateway.
As the Ambassador routes did not work, I analysed further and found that the DNS service (IP 10.96.0.10) is not accessible from a busybox pod, which means that none of the service names can be resolved. Could I get any suggestions, please?
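For reference, the kind of check I ran from a busybox pod looks roughly like this (the image tag and service name are just what I happened to use):
kubectl run -it --rm --restart=Never busybox --image=busybox:1.28 -- nslookup kubernetes.default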
1. You should use the newest version of Flannel.
Flannel does not set up service IPs; kube-proxy does. You should look at kube-proxy on your nodes and ensure it is not reporting errors (see the example commands after this list).
I'd suggest taking a look at https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#tabs-pod-install-4 and ensure you have met the requirements stated there.
Similar issue but with Calico plugin you can find here: https://github.com/projectcalico/calico/issues/1798
2. Check that port 8285 is open: Flannel's udp backend uses UDP port 8285 for sending encapsulated IP packets (the default vxlan backend uses UDP 8472 instead). Make sure this traffic is allowed to pass between the hosts.
3. Ambassador includes an integrated diagnostics service to help with troubleshooting, which may be useful for you. By default, it is not exposed to the Internet. To view it, we'll need to get the name of one of the Ambassador pods:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
ambassador-3655608000-43x86 1/1 Running 0 2m
ambassador-3655608000-w63zf 1/1 Running 0 2m
Forwarding local port 8877 to one of the pods:
kubectl port-forward ambassador-3655608000-43x86 8877
will then let us view the diagnostics at http://localhost:8877/ambassador/v0/diag/.
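As a quick check for points 1 and 2 above, something like the following should show whether kube-proxy is reporting errors and whether the firewall allows Flannel traffic (the label selector assumes a standard kubeadm install, and firewall-cmd assumes CentOS with firewalld):
kubectl -n kube-system logs -l k8s-app=kube-proxy
firewall-cmd --list-all   # look for UDP 8285 (or 8472 for the vxlan backend) among the allowed ports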
The first point should solve your problem; if not, try the remaining ones.
I hope this helps.
Question
What are the commands to start/stop the K8S cluster? After completing the installation by following Using kubeadm to Create a Cluster, I restarted the CentOS server, and the K8S cluster is not running after the restart.
There are services mentioned in Fedora (Single Node) listing services, but no such services are installed via kubeadm:
Failed to restart etcd.service: Unit not found.
Failed to restart kube-apiserver.service: Unit not found.
Failed to restart kube-controller-manager.service: Unit not found.
Environment
CentOS 7 on Virtual Box. K8S 1.8.5
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 36m v1.8.5
node01 Ready <none> 35m v1.8.5
node02 Ready <none> 35m v1.8.5
As you are using kubeadm to initialize and administer the k8s cluster: as I understand it, kubeadm uses the following approach.
Systemd manages only the kubelet service on the node.
The kubelet creates and manages the k8s control plane components (kube-apiserver, kube-controller-manager, etcd, and the scheduler) as static pods; kube-proxy runs separately as a DaemonSet.
The kubelet reads their manifest files from /etc/kubernetes/manifests.
So if you want to stop the control plane components, you just need to move these manifest files to another directory; move them back to start them again.
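A rough way to stop and start the control plane on a kubeadm master, then, is to move the manifests out and back, and to use systemd only for the kubelet itself (the /tmp holding directory is just an example):
mkdir -p /tmp/k8s-manifests
mv /etc/kubernetes/manifests/*.yaml /tmp/k8s-manifests/   # control plane pods stop
mv /tmp/k8s-manifests/*.yaml /etc/kubernetes/manifests/   # control plane pods come back
systemctl restart kubelet                                 # the only systemd-managed component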
I recently started learning Kubernetes by using Minikube locally on my Mac. Previously, I was able to start a local Kubernetes cluster with Minikube 0.10.0, create a deployment, and view the Kubernetes dashboard.
Yesterday I tried to delete the cluster and redo everything from scratch. However, I found I cannot get the assets deployed and cannot view the dashboard. From what I saw, everything seemed to get stuck during container creation.
After I ran minikube start, it reported
Starting local Kubernetes cluster...
Kubectl is now configured to use the cluster.
When I ran kubectl get pods --all-namespaces, it reported (pay attention to the STATUS column):
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kube-addon-manager-minikube 0/1 ContainerCreating 0 51s
docker ps showed nothing:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
minikube status tells me the VM and cluster are running:
minikubeVM: Running
localkube: Running
If I tried to create a deployment and an autoscaler, I was told they were created successfully:
kubectl create -f configs
deployment "hello-minikube" created
horizontalpodautoscaler "hello-minikube-autoscaler" created
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default hello-minikube-661011369-1pgey 0/1 ContainerCreating 0 1m
default hello-minikube-661011369-91iyw 0/1 ContainerCreating 0 1m
kube-system kube-addon-manager-minikube 0/1 ContainerCreating 0 21m
When exposing the service, it said:
$ kubectl expose deployment hello-minikube --type=NodePort
service "hello-minikube" exposed
$ kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-minikube 10.0.0.32 <nodes> 8080/TCP 6s
kubernetes 10.0.0.1 <none> 443/TCP 22m
When I tried to access the service, I was told:
curl $(minikube service hello-minikube --url)
Waiting, endpoint for service is not ready yet...
docker ps still showed nothing. It looked to me like everything got stuck when creating containers. I tried some other ways to work around this issue:
Upgraded to minikube 0.11.0
Used the xhyve driver instead of the VirtualBox driver
Deleted everything cached, like ~/.minikube, ~/.kube, and the cluster, and retried
None of them worked for me.
Kubernetes is still new to me and I would like to know:
How can I troubleshoot this kind of issue?
What could be the cause of this issue?
Any help is appreciated. Thanks.
It turned out to be a network problem in my case.
The pod status is "ContainerCreating", and I found that during container creation the Docker image is pulled from gcr.io, which is inaccessible in China (blocked by the GFW). It worked the previous time because I happened to be connected through a VPN.
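If you are in the same situation, one workaround (the proxy address is only an example; see also the proxy-related answer below) is to pass a proxy to the Docker daemon inside the minikube VM when starting it:
minikube start --docker-env http_proxy=http://<proxy-host>:<port> --docker-env https_proxy=http://<proxy-host>:<port>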
I haven't tried minikube, but I use Kubernetes. With the information provided, it is difficult to say what the cause of the issue is. Your minikube has no problem creating resources, but ContainerCreating points to a problem with the Docker daemon, improper communication between kube-api and the Docker daemon, or some problem with the kubelet.
You can try the following command:
kubectl describe po POD_NAME
This will give you the pod's events. Maybe this will provide a path to the root cause of the issue.
You may also check the kubelet logs for events.
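For example, on a node with a systemd-managed kubelet (which may not match minikube's internal setup) that would be something like:
journalctl -u kubelet -f
kubectl get events --all-namespaces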
I had this problem on Windows, but it was related to an NTLM proxy. I deleted the minikube VM then recreated it with the correct proxy settings for my CNTLM installation:
minikube start \
--docker-env http_proxy=http://10.0.2.2:3128 \
--docker-env https_proxy=http://10.0.2.2:3128 \
--docker-env no_proxy=localhost,127.0.0.1,::1,192.168.99.100
See https://blog.alexellis.io/minikube-behind-proxy/
The horizontalpodautoscaler (HPA) requires Heapster. You'll need to run Heapster in minikube for it to work. You can always debug these kinds of issues with minikube logs or interactively through the dashboard at minikube dashboard.
You can find the steps to run Heapster and Grafana at https://github.com/kubernetes/heapster
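If I remember correctly, minikube also ships a Heapster addon, so something like this may be enough (addon availability depends on your minikube version):
minikube addons list
minikube addons enable heapster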
For me, it took several minutes before the ContainerCreating problem appeared. After executing the following command:
systemctl status kube-controller-manager.service
I get this error:
Sync "default/redis-master-2229813293" failed with unable to create pods: No API token found for service account "default", retry after the token is automatically created and added to the service account.
There are two ways to solve this:
Set up the service account with a token
Remove ServiceAccount from the KUBE_ADMISSION_CONTROL setting of the api-server
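For reference, on a package-based installation the admission control list usually lives in the api-server config file; the path and variable below follow the typical /etc/kubernetes/apiserver layout, so adjust to your setup. Removing ServiceAccount from the list would look roughly like this:
# /etc/kubernetes/apiserver
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
# then restart the api-server
systemctl restart kube-apiserver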