Why can't my host connect via NodePort to a Kubernetes cluster on Google Cloud VM instances? - kubernetes

I have installed a Kubernetes cluster using this tutorial.
When I set it up on a VirtualBox VM, my host can connect via NodePort normally. When I tried it on a Compute Engine VM instance, the host can't connect to the Kubernetes cluster via NodePort.
I have attached two pictures.
Thank you for your support.
Kubernetes cluster (bare metal) on a local VirtualBox VM
Kubernetes cluster (bare metal) on Google Cloud Platform VM instances

This took me a while to test, but I finally have a result. It turns out the cause of your issue is the combination of Calico and the GCP firewall. To be more specific, you have to add firewall rules before you can establish connectivity.
Following this document on installing Calico for GCE:
GCE blocks traffic between hosts by default; run the following command
to allow Calico traffic to flow between containers on different hosts
(where the source-ranges parameter assumes you have created your
project with the default GCE network parameters - modify the address
range if yours is different):
So you need to allow the traffic to flow between containers:
gcloud compute firewall-rules create calico-ipip --allow 4 --network "default" --source-ranges "10.128.0.0/9"
Note that this IP range should be changed. You can use 10.0.0.0/8 for test purposes, but this is far too wide a range, so please narrow it down to your needs.
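If you're unsure what to narrow it to, you can look up the actual subnet ranges of your VPC first (a sketch; the network name default is taken from the quote above and may differ in your project):
gcloud compute networks subnets list --network default --format="table(name,region,ipCidrRange)"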
Then proceed with setting up instances for master and nodes.
You can actually skip most of the steps from the tutorial you posted, as connectivity is handled by the cloud provider. Here is a really simple script I use for kubeadm on VMs. You can also perform these steps one by one.
#!/bin/bash
# Disable swap (required by the kubelet)
swapoff -a
# Let iptables see bridged traffic (ufw's sysctl.conf uses the slash-separated key syntax)
echo net/bridge/bridge-nf-call-ip6tables = 1 >> /etc/ufw/sysctl.conf
echo net/bridge/bridge-nf-call-iptables = 1 >> /etc/ufw/sysctl.conf
echo net/bridge/bridge-nf-call-arptables = 1 >> /etc/ufw/sysctl.conf
# Container runtime and prerequisites
apt-get update
apt-get install -y ebtables ethtool
apt-get install -y docker.io
apt-get install -y apt-transport-https
apt-get install -y curl
# Add the Kubernetes apt repository and install the tooling
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
# Initialize the control plane; the pod CIDR matches Calico's default
kubeadm init --pod-network-cidr=192.168.0.0/16
# Set up kubectl for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Install Calico (RBAC first, then the networking manifest)
kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
# Allow scheduling pods on the master (useful for single-node testing)
kubectl taint nodes --all node-role.kubernetes.io/master-
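Once the script finishes, a quick sanity check before deploying anything:
kubectl get nodes -o wide           # the node should report Ready
kubectl get pods -n kube-system     # calico-node and coredns should be Running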
In my case I used the simple Redis application from the Kubernetes documentation:
root@calico-master:/home/xxx# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 29m
redis-master ClusterIP 10.107.41.117 <none> 6379/TCP 26m
root@calico-master:/home/xxx# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
redis-master-57fc67768d-5lx92 1/1 Running 0 27m 192.168.1.4 calico <none>
root@calico-master:/home/xxx# ping 192.168.1.4
PING 192.168.1.4 (192.168.1.4) 56(84) bytes of data.
64 bytes from 192.168.1.4: icmp_seq=1 ttl=63 time=1.48 ms
Before adding the firewall rules and the regular Calico installation, I was not able to ping or wget from the service. After that, there is no problem with pinging the IP or hostname, and wget works as well:
root@calico-master:/home/xxx# wget http://10.107.41.117:6379
--2018-10-24 13:24:43--  http://10.107.41.117:6379/
Connecting to 10.107.41.117:6379... connected.
HTTP request sent, awaiting response... 200 No headers, assuming HTTP/0.9
Length: unspecified
Saving to: ‘index.html.2’
The steps above were also tested with type: NodePort, and that works as well.
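For reference, a sketch of how such a NodePort test can be set up (the service name redis-nodeport is my own placeholder; adjust to your deployment):
kubectl expose deployment redis-master --name=redis-nodeport --type=NodePort --port=6379
kubectl get svc redis-nodeport    # note the mapped port, e.g. 6379:3xxxx/TCP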
Another way is to use Flannel, which I also tested, and it worked out of the box for the purposes of testing your issue. Be sure to read more about CNIs so you can choose the one that best suits your needs.
Hope this solves your problem.

That's because in minikube there's only one node for everything, and the VM is that node. So if you are in the VM, you can connect to the NodePort locally or on localhost.
In the case of GCP, you don't usually run the master(s) and the nodes on the same VMs. So you need to send your request to one of the nodes (VMs) where your pod is listening.
To get the list of nodes on your cluster you can simply run:
kubectl get nodes -o=wide
You should see, for example, an internal IP for your nodes. Then you can try:
curl http://<Internal IP>:<NodePort>
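Scripted, that could look like this (my-service is a placeholder for your NodePort service; both jsonpath expressions are standard kubectl usage):
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
NODE_PORT=$(kubectl get svc my-service -o jsonpath='{.spec.ports[0].nodePort}')
curl "http://${NODE_IP}:${NODE_PORT}"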

You can also get the IP details using:
kubectl describe nodes
Then try to connect to your host using the corresponding IP.
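The addresses are easy to spot in the describe output, e.g.:
kubectl describe nodes | grep -A2 'Addresses:'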

Related

Unable to connect internet/google.com from pod. Docker and k8 are able to pull images

I am trying to learn Kubernetes.
I created a single-node Kubernetes cluster on Oracle Cloud using the steps here:
cat /etc/resolv.conf
>> nameserver 169.254.169.254
kubectl run busybox --rm -it --image=busybox --restart=Never -- sh
cat /etc/resolv.conf
>> nameserver 10.33.0.10
nslookup google.com
>>Server: 10.33.0.10
Address: 10.33.0.10:53
;; connection timed out; no servers could be reached
ping 10.33.0.10
>>PING 10.33.0.10 (10.33.0.10): 56 data bytes
kubectl get svc -n kube-system -o wide
>> CLUSTER-IP - 10.33.0.10
kubectl logs --namespace=kube-system -l k8s-app=kube-dns
>>[ERROR] plugin/errors: 2 google.com. A: read udp 10.32.0.9:57385->169.254.169.254:53: i/o timeout
I'm not able to identify whether this is a CoreDNS error or a pod networking issue. Any direction would really help.
Kubernetes has deprecated Docker as a container runtime after v1.20.
The Kubernetes developers decided to deprecate Docker as an underlying runtime in favor of runtimes that use the Container Runtime Interface (CRI) created for Kubernetes.
To support this, Mirantis and Docker came to the rescue by agreeing to partner on maintaining the shim code as a standalone project (cri-dockerd).
More details here.
sudo systemctl enable docker
# -- Installing cri-dockerd
VER=$(curl -s https://api.github.com/repos/Mirantis/cri-dockerd/releases/latest | grep tag_name | cut -d '"' -f 4)
echo $VER
wget https://github.com/Mirantis/cri-dockerd/releases/download/${VER}/cri-dockerd-${VER}-linux-arm64.tar.gz
tar xvf cri-dockerd-${VER}-linux-arm64.tar.gz
# install copies the binary with the right owner and mode in one step,
# so a separate cp to /usr/bin is not needed
install -o root -g root -m 0755 cri-dockerd /usr/bin/cri-dockerd
# -- Verification
cri-dockerd --version
# -- Configure systemd units for cri-dockerd
wget https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.service
wget https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.socket
sudo cp cri-docker.socket cri-docker.service /etc/systemd/system/
sudo cp cri-docker.socket cri-docker.service /usr/lib/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable cri-docker.service
sudo systemctl enable --now cri-docker.socket
# -- Using cri-dockerd on new Kubernetes cluster
systemctl status docker | grep Active
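With cri-dockerd running, you then point kubeadm at its socket when creating the cluster (the socket path below is cri-dockerd's documented default; verify it on your system):
sudo kubeadm init --cri-socket unix:///var/run/cri-dockerd.sock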
I ran into a similar issue with almost the same scenario as described above. The accepted solution https://stackoverflow.com/a/72104194/1119570 is wrong. This is a pure networking issue that is not related to an EKS upgrade in any way.
The root cause in our case was that the worker node's AWS EKS Linux 1.21 AMI had been hardened by our security department, which turned off the following setting in /etc/sysctl.conf:
net.ipv4.ip_forward = 0
After switching this setting to net.ipv4.ip_forward = 1 and rebooting the EC2 node, everything started working properly. Hope this helps!
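A sketch of that fix, assuming the setting lives in /etc/sysctl.conf as above (applying it with sysctl -w avoids the reboot):
sudo sed -i 's/^net.ipv4.ip_forward = 0/net.ipv4.ip_forward = 1/' /etc/sysctl.conf
sudo sysctl -w net.ipv4.ip_forward=1
sysctl net.ipv4.ip_forward    # should print: net.ipv4.ip_forward = 1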

Kubernetes Deployment: Error: failed to create deployment:

Environment Details:
Kubernetes version: `v1.20.2`
Master Node: `Bare Metal/Host OS: CentOS 7`
Worker Node: `VM/Host OS: CentOS 7`
I have installed and configured the Kubernetes cluster, with the master node on a bare-metal server and the worker node on a Windows Server 2012 Hyper-V VM. Both master and worker nodes run the same Kubernetes version (v1.20.2) and CentOS 7. The worker node successfully joined the master; below is the get nodes status.
$ kubectl get nodes
NAME               STATUS   ROLES                  AGE    VERSION
k8s-worker-node1   Ready    <none>                 2d2h   v1.20.2
master-node        Ready    control-plane,master   3d4h   v1.20.2
While creating a deployment from the worker node, I get the error message below.
On the worker node, I issued the following command.
$ kubectl create deployment nginx-depl --image=nginx
Error message is:
error: failed to create deployment: Post "http://localhost:8080/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create": dial tcp: lookup localhost on 8.8.8.8:53: no such host
Please help me resolve this issue, as I am not able to understand what the problem is.
You may have to run minikube start first. I'm learning, and between one class and another I forgot to run this command. I hope this helps someone.
This worked for me.
It seems that you are issuing the kubectl create deployment command on the worker node. This won't work because kubectl communicates with the kube-apiserver for cluster management. Since the apiserver does not run on the worker node, executing the command there raises an error.
Instead, execute the same kubectl command on the master node as a non-root user, after the following additional commands:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl create deployment nginx-depl --image=nginx
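Alternatively, if you really want kubectl to work from the worker, you can copy the admin kubeconfig over from the master (a sketch; master-node is a placeholder hostname and you need root access to read the file on the master):
mkdir -p ~/.kube
scp root@master-node:/etc/kubernetes/admin.conf ~/.kube/config
kubectl get nodes    # should now reach the API server on the master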

Kubernetes Flannel k8s_install-cni_kube-flannel-ds exited on worker node

I am setting up my very first Kubernetes cluster. We are expecting to have a mix of Windows and Linux nodes, so I picked Flannel as my CNI. I am using RHEL 7.7 as my master node, I have two other RHEL 7.7 machines as worker nodes, and the rest are Windows Server 2019. For the most part, I was following the documentation provided on the Microsoft site: https://learn.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/getting-started-kubernetes-windows and the one on the Kubernetes site: https://kubernetes.cn/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/ . I know the article on the Microsoft site is more than 2 years old, but it is the only guide I found for mixed-mode operations.
I have done following so far on Master and worker RHEL nodes:
stopped and disabled firewalld
disabled selinux
update && upgrade
Disabled swap partition
Added /etc/hosts entry for all nodes involved in my Kubernetes cluster
Installed Docker CE 19.03.11
Installed kubectl, kubeadm, and kubelet 1.18.3 (build date 2020-05-20)
Prepared the Kubernetes control plane for Flannel: sudo sysctl net.bridge.bridge-nf-call-iptables=1 (see the persistence sketch just after this list)
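A sysctl set this way does not survive a reboot; a sketch of making it persistent (the file name k8s.conf is arbitrary):
echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/k8s.conf
sudo sysctl --system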
I have now done the following on the RHEL master node:
Initialize cluster
kubeadm init --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
kubectl as non-root user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Patch the DaemonSet with the node selector
wget https://raw.githubusercontent.com/Microsoft/SDN/master/Kubernetes/flannel/l2bridge/manifests/node-selector-patch.yml
kubectl patch ds/kube-proxy --patch "$(cat node-selector-patch.yml)" -n=kube-system
After the patch, kube-proxy looks like this:
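(The screenshot didn't survive here; to check the same thing from the CLI, the patched DaemonSet should now carry a Linux-only nodeSelector. The label key below is the one the Microsoft patch used at the time, so treat it as an assumption:)
kubectl get ds kube-proxy -n kube-system -o jsonpath='{.spec.template.spec.nodeSelector}'
# expected output similar to: map[beta.kubernetes.io/os:linux]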
Add Flannel
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Modify the net-conf.json section of the flannel manifest in order to set the VNI to 4096 and the Port to 4789. It should look as follows:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "VNI": 4096,
        "Port": 4789
      }
    }
Apply modified kube-flannel
kubectl apply -f kube-flannel.yml
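A quick way to confirm flannel actually rolled out before joining nodes (the DaemonSet name is taken from the manifest of that era and is an assumption):
kubectl -n kube-system rollout status ds/kube-flannel-ds-amd64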
After adding the network, here is what I get for the pods in kube-system:
Add Windows Flannel and kube-proxy DaemonSets
curl -L https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/kube-proxy.yml | sed 's/VERSION/v1.18.0/g' | kubectl apply -f -
kubectl apply -f https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/flannel-overlay.yml
Join Worker node
I am now trying to join the RHEL 7.7 worker node by executing the kubeadm join command generated when I initialized my cluster.
The worker node initializes fine, as seen below:
When I go to my RHEL worker node, I see that the k8s_install-cni_kube-flannel-ds-amd64-f4mtp_kube-system container has exited, as seen below:
Can you please let me know if I am following the correct procedure? I believe a Flannel CNI is required for pods to talk to each other within the Kubernetes cluster.
If Flannel is difficult to set up for mixed mode, can we use another network that works?
If we decide to go with RHEL nodes only, what is the best and easiest network plugin I can install without going through a lot of issues?
Thanks, and I appreciate it.
There are a lot of materials about Kubernetes on the official site, and I encourage you to check them out:
Kubernetes.io
I divided this answer into parts:
CNI
Troubleshooting
CNI
What is CNI?
CNI (Container Network Interface), a Cloud Native Computing Foundation project, consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers, along with a number of supported plugins. CNI concerns itself only with network connectivity of containers and removing allocated resources when the container is deleted. Because of this focus, CNI has a wide range of support and the specification is simple to implement.
-- Github.com: Containernetworking: CNI
Your CNI plugin, in simple terms, is responsible for pod networking inside your cluster.
There are multiple CNI plugins like:
Flannel
Calico
Multus
Weavenet
What I mean by that is that you don't need to use Flannel. You can use another plugin like Calico. The major consideration is that they differ from each other, and you should pick the option best suited to your use case (support for some specific feature, for example).
There are a lot of materials/resources on this topic. Please take a look at some of them:
Youtube.com: Kubernetes and the CNI: Where We Are and What's Next - Casey Callendrello, CoreOS
Youtube.com: Container Network Interface (CNI) Explained in 7 Minutes
Kubernetes.io: Docs: Concepts: Cluster administration: Networking
As for:
If Flannel is difficult to setup for mixed mode, can we use other network which can work?
If by mixed mode you mean using nodes that are Windows and Linux machines, I would stick to guides that are already written, like the one you mentioned: Kubernetes.io: Adding Windows nodes
As for:
If we decide to go only and only RHEL nodes, what is the best and easiest network plugin I can install without going through lot of issues?
The best way to choose a CNI plugin is to look for the solution that fits your needs best. You can follow this link for an overview:
Kubernetes.io: Docs: Concepts: Cluster administration: Networking
You can also look here (please bear in mind that this article is from 2018 and could be outdated):
Itnext.io: Benchmark results of Kubernetes network plugin cni over 10gbit's network
Troubleshooting
when I go to my RHEL worker node, I see that k8s_install-cni_kube-flannel-ds-amd64-f4mtp_kube-system container is exited as seen below:
Your k8s_install-cni_kube-flannel-ds-amd64-f4mtp_kube-system container exited with status 0, which should indicate correct provisioning.
You can check the logs of the flannel pods by invoking the command below:
kubectl logs POD_NAME
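To find POD_NAME, you can filter by the label the upstream flannel manifest sets (the label key is an assumption based on that manifest):
kubectl get pods -n kube-system -l app=flannel -o wide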
You can also refer to official documentation of Flannel: Github.com: Flannel: Troubleshooting
As I said in the comment:
To check if your CNI is working, you can spawn 2 pods on 2 different nodes and try to make a connection between them (for example, ping them).
Steps:
Spawn pods
Check their IP addresses
Exec into pods
Ping
Spawn pods
Below is an example deployment definition that will spawn ubuntu pods. They will be used to check whether pods on different nodes can communicate:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu
spec:
  selector:
    matchLabels:
      app: ubuntu
  replicas: 5
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
        - name: ubuntu
          image: ubuntu:latest
          command:
            - sleep
            - infinity
Please bear in mind that this example is for testing purposes only. Apply the above definition with:
kubectl apply -f FILE_NAME.yaml
Check their IP addresses
After the pods have spawned, you should be able to run:
$ kubectl get pods -o wide
and see output similar to this:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ubuntu-557dc88445-lngt7 1/1 Running 0 8s 10.20.0.4 NODE-1 <none> <none>
ubuntu-557dc88445-nhvbw 1/1 Running 0 8s 10.20.0.5 NODE-1 <none> <none>
ubuntu-557dc88445-p8v86 1/1 Running 0 8s 10.20.2.4 NODE-2 <none> <none>
ubuntu-557dc88445-vm2kg 1/1 Running 0 8s 10.20.1.9 NODE-3 <none> <none>
ubuntu-557dc88445-xwt86 1/1 Running 0 8s 10.20.0.3 NODE-1 <none> <none>
You can see from the above output:
what IP address each pod has
which node each pod is assigned to.
Using the above example, we will try to make a connection between:
ubuntu-557dc88445-lngt7 (the first one) with an IP address of 10.20.0.4 on NODE-1
ubuntu-557dc88445-p8v86 (the third one) with an IP address of 10.20.2.4 on NODE-2
Exec into pods
You can exec into the pod to run commands:
$ kubectl exec -it ubuntu-557dc88445-lngt7 -- /bin/bash
Please take a look at the official documentation here: Kubernetes.io: Get shell running container
Ping
Ping is not built into the ubuntu image, but you can install it with:
$ apt update && apt install iputils-ping
After that, you can ping the second pod to check whether you can connect to it:
root@ubuntu-557dc88445-lngt7:/# ping 10.20.2.4 -c 4
PING 10.20.2.4 (10.20.2.4) 56(84) bytes of data.
64 bytes from 10.20.2.4: icmp_seq=1 ttl=62 time=0.168 ms
64 bytes from 10.20.2.4: icmp_seq=2 ttl=62 time=0.169 ms
64 bytes from 10.20.2.4: icmp_seq=3 ttl=62 time=0.174 ms
64 bytes from 10.20.2.4: icmp_seq=4 ttl=62 time=0.206 ms
--- 10.20.2.4 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3104ms
rtt min/avg/max/mdev = 0.168/0.179/0.206/0.015 ms

The connection to the server localhost:8080 was refused

I was able to cluster 2 nodes together in Kubernetes. The master node seems to be running fine but running any command on the worker node results in the error: "The connection to the server localhost:8080 was refused - did you specify the right host or port?"
From master (node1),
$ kubectl get nodes
NAME STATUS AGE VERSION
node1 Ready 23h v1.7.3
node2 Ready 23h v1.7.3
From worker (node 2),
$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
$ telnet localhost 8080
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
$ ping localhost
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.032 ms
I am not sure how to fix this issue. Any help is appreciated.
On executing "journalctl -xeu kubelet", I see:
"CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container", but this seems to be related to installing a pod network ... which I am not able to do because of the above error.
Thanks!
kubectl interfaces with the kube-apiserver for cluster management. The command works on the master node because that's where the kube-apiserver runs. On the worker nodes, only the kubelet and kube-proxy are running.
In fact, kubectl is supposed to be run on a client (e.g. laptop, desktop) and not on the Kubernetes nodes.
From the master you need ~/.kube/config; pass this file as an argument to the kubectl command. Copy the config file to the other server or your laptop, then pass it via the --kubeconfig flag,
e.g.:
kubectl --kubeconfig=~/.kube/config get nodes
This worked for me after executing the following commands:
$ sudo mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
As a hint, the message being prompted indicates it's related to the network.
One potential answer, which worked in my case, is to have a look at the cluster value of your context within contexts in your kubeconfig.
My error was that I had placed an incorrect cluster name there.
Having the appropriate cluster name is crucial, as it is used to look up the respective cluster for the context, and the error will then disappear.
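You can inspect and switch contexts entirely from the CLI, e.g.:
kubectl config get-contexts            # lists contexts and marks the active one
kubectl config view --minify           # shows the config of the current context only
kubectl config use-context CONTEXT     # switch if the wrong one is selected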
To solve the issue The connection to the server localhost:8080 was refused - did you specify the right host or port?, you may be missing a step.
My fix:
On macOS, if you install kubectl with brew, you still need to brew install minikube; afterwards, run minikube start. This will start your cluster.
Run the command kubectl cluster-info and you should get a happy path response similar to:
Kubernetes control plane is running at https://127.0.0.1:63000
KubeDNS is running at https://127.0.0.1:63308/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Kubernetes install steps: https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/
Minikube docs: https://minikube.sigs.k8s.io/docs/start/
Ensure the right context is selected if you're running Kubernetes in Docker Desktop.
Once you've selected the right one, you'll be able to run kubectl commands without any exception:
% kubectl cluster-info
Kubernetes control plane is running at https://kubernetes.docker.internal:6443
CoreDNS is running at https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
% kubectl get nodes
NAME STATUS ROLES AGE VERSION
docker-desktop Ready control-plane,master 2d11h v1.22.5

Installing a Kubernetes pod network for cluster nodes hosted on VirtualBox VMs

On OS X 10.11.6, I created 4 CentOS 7 VMs in VirtualBox, each with two interfaces (one NAT and one host-only network). Each VM's host-only interface receives an IP via DHCP and DNS via dnsmasq.
OS X is running dnsmasq configured via a /usr/local/etc/dnsmasq.conf file that contains:
interface=vboxnet0
bind-interfaces
dhcp-range=vboxnet0,192.168.56.100,192.168.56.200,255.255.255.0,infinite
dhcp-leasefile=/usr/local/etc/dnsmasq.leases
local=/dev/
expand-hosts
domain=dev
address=/kube-master.dev/192.168.56.100
address=/kube-minion1.dev/192.168.56.101
address=/kube-minion2.dev/192.168.56.102
address=/kube-minion3.dev/192.168.56.103
address=/vbox-host.dev/192.168.56.1
dhcp-host=08:00:27:09:48:16,192.168.56.100
dhcp-host=0a:00:27:00:00:00,192.168.56.1
dhcp-host=08:00:27:95:AE:39,192.168.56.101
dhcp-host=08:00:27:97:C9:D4,192.168.56.102
dhcp-host=08:00:27:9B:AD:B5,192.168.56.103
I can ssh into each VM through its respective host-only adapter's associated address (e.g., kube-master.dev, kube-minion1.dev, kube-minion2.dev, kube-minion3.dev), and then
yum update
and, skipping a few steps, get to the point of installing kubeadm as per http://kubernetes.io/docs/getting-started-guides/kubeadm/, that is:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y docker kubelet kubeadm kubectl kubernetes-cni ebtables
systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet
It is unclear to me whether the following is correct, but on kube-master.dev I then execute:
kubeadm init --api-advertise-addresses=192.168.56.100 --api-external-dns-names=kube-master.dev
And then on each minion execute:
rm -Rf /etc/kubernetes/manifests/
kubeadm join --token=e7cd12.68011e93d5db7670 192.168.56.100
On kube-master.dev, I then run
kubectl get nodes
to verify that each node has joined the cluster.
The command returns:
NAME STATUS AGE
kube-master.dev Ready 44m
kube-minion1.dev Ready 40m
kube-minion2.dev Ready 39m
kube-minion3.dev Ready 39m
indicating things are groovy.
Afterward, things go entirely off the rails when I attempt to install a pod network.
On kube-master.dev, I run:
kubectl apply -f https://git.io/weave-kube
to install Weave Net, and once the pod network is installed I start checking whether the network is working by executing:
watch kubectl get pods --all-namespaces
And
kube-dns-654381707-05i1t 0/3
never moves off of zero.
So please, what am I doing wrong? I've hammered at this for days. The kubeadm documentation is a bit thin in a few places, so I'm not sure I init'ed the master correctly, and installing the pod network is a bit of conjecture on my part. Also, I haven't found a tutorial, other than the Kubernetes kubeadm guide and the associated YouTube video, documenting the use of kubeadm to set up a Kubernetes cluster.
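For anyone hitting the same symptom, a hedged starting point for digging in (the pod name is taken from the output above; the kubedns container name is an assumption based on kube-dns deployments of that era):
kubectl describe pod -n kube-system kube-dns-654381707-05i1t     # check the Events section
kubectl logs -n kube-system kube-dns-654381707-05i1t -c kubedns  # per-container logs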