I've configured Flannel successfully on the worker nodes. When I run ifconfig on a worker, I see a flannel.1 interface (I'm using VXLAN). There are also docker0 and cbr0 interfaces.
However, when a pod comes up, the Docker container on that node gets its IP address from the cbr0 interface and not from the Flannel network. I did try manually removing the cbr0 interface, but it comes back as soon as a pod gets scheduled on that node.
Docker is started this way:
dockerd --bip=10.200.50.1/24 --mtu=8951 --iptables=false --ip-masq=false --host=unix:///var/run/docker.sock --log-level=error --storage-driver=overlay
Flannel env:
$ cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.200.0.0/16
FLANNEL_SUBNET=10.200.50.1/24
FLANNEL_MTU=8951
FLANNEL_IPMASQ=false
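For what it's worth, the --bip/--mtu values passed to dockerd above are just the contents of this file; a minimal sketch of deriving them at start-up (assuming a wrapper script is free to source the file) would be:
. /run/flannel/subnet.env
dockerd --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} --ip-masq=false --iptables=false \
  --host=unix:///var/run/docker.sock --log-level=error --storage-driver=overlay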
ifconfig says:
$ ifconfig
cbr0 Link encap:Ethernet HWaddr 0a:58:0a:c8:04:01
inet addr:10.200.4.1 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::d99:edff:fec6:9dd0/64 Scope:Link
UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:536 (536.0 B) TX bytes:648 (648.0 B)
docker0 Link encap:Ethernet HWaddr 02:42:a4:4b:44:dc
inet addr:10.200.50.1 Bcast:0.0.0.0 Mask:255.255.255.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
eth0 Link encap:Ethernet HWaddr 12:e7:81:c3:1e:58
inet addr:10.0.2.152 Bcast:10.0.2.255 Mask:255.255.255.0
inet6 addr: fe90::10e8:86ff:fec3:1e58/54 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1
RX packets:911006 errors:0 dropped:0 overruns:0 frame:0
TX packets:821093 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:725362580 (725.3 MB) TX bytes:155420170 (155.4 MB)
flannel.1 Link encap:Ethernet HWaddr 12:10:54:76:3e:c4
inet addr:10.200.50.0 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::1410:54ff:fe86:3ec4/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:8951 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:11 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:27 errors:0 dropped:0 overruns:0 frame:0
TX packets:27 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:1624 (1.6 KB) TX bytes:1624 (1.6 KB)
How do I ensure that the pod's IP address is derived from the flannel interface?
You might want to check your Kubernetes deployment to ensure that you are not configuring any other network plugin, since you are using Flannel.
http://kubernetes.io/docs/admin/network-plugins/
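One concrete thing to look for (an assumption on my part, not something visible in your output): cbr0 is the bridge the kubelet creates when it is doing the pod networking itself, e.g. with the kubenet plugin, and in that case pod IPs come from the node's podCIDR rather than from Docker's --bip. A rough check on the worker:
# which network plugin is the kubelet running with? (hypothetical check)
ps aux | grep [k]ubelet | tr ' ' '\n' | grep -- '--network-plugin'
# if it reports kubenet (or another cbr0-managing setting), drop that flag and restart the
# kubelet so pods go back to docker0, which already carries the flannel subnet 10.200.50.1/24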
I am new to Kubernetes, so some of my questions may be basic.
My setup: 2 VMs (running Ubuntu 16.04.2)
Kubernetes Version: 1.7.1 on both the Master Node (kube4local) and the Slave Node (kube5local)
My Steps:
1. On both Master and Slave Nodes, installed the required Kubernetes packages (kubelet, kubeadm, kubectl, kubernetes-cni) and the Docker (docker.io) package.
On the Master Node:
1. Ran kubeadm init:
vagrant@kube4local:~$ sudo kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.1
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.06.0-ce. Max validated version: 1.12
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [kube4local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.2.15]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 1051.552012 seconds
[token] Using token: 3c68b6.8c3f8d5a0a29a3ac
[apiconfig] Created RBAC rules
[addons] Applied essential addon: kube-proxy
[addons] Applied essential addon: kube-dns
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token 3c68b6.8c3f8d5a0a29a3ac 10.0.2.15:6443
vagrant@kube4local:~$ mkdir -p $HOME/.kube
vagrant@kube4local:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
vagrant@kube4local:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
vagrant@kube4local:~$ sudo kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
serviceaccount "weave-net" created
clusterrole "weave-net" created
clusterrolebinding "weave-net" created
daemonset "weave-net" created
On the Slave Node:
Note: I am able to do a basic ping test, and ssh and scp commands between the master node running in VM1 and the slave node running in VM2 work fine.
Ran the join command.
Output of the join command on the slave node:
vagrant@kube5local:~$ sudo kubeadm join --token 3c68b6.8c3f8d5a0a29a3ac 10.0.2.15:6443
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.06.0-ce. Max validated version: 1.12
[preflight] WARNING: hostname "" could not be reached
[preflight] WARNING: hostname "" lookup : no such host
[preflight] Some fatal errors occurred:
hostname "" a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`
Why do I get this error? My /etc/hosts is correct:
[preflight] WARNING: hostname "" could not be reached
[preflight] WARNING: hostname "" lookup : no such host
Output of Status Commands On the Master Node:
vagrant@kube4local:~$ sudo kubectl cluster-info
Kubernetes master is running at https://10.0.2.15:6443
vagrant@kube4local:~$ sudo kubectl get nodes
NAME STATUS AGE VERSION
kube4local Ready 26m v1.7.1
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Output of ifconfig on Master Node(kube4local):
vagrant@kube4local:~$ ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:3a:c4:00:50
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
enp0s3 Link encap:Ethernet HWaddr 08:00:27:19:2c:a4
inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe19:2ca4/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:260314 errors:0 dropped:0 overruns:0 frame:0
TX packets:58921 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:334293914 (334.2 MB) TX bytes:3918136 (3.9 MB)
enp0s8 Link encap:Ethernet HWaddr 08:00:27:b8:ef:b6
inet addr:192.168.56.104 Bcast:192.168.56.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:feb8:efb6/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:247 errors:0 dropped:0 overruns:0 frame:0
TX packets:154 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:36412 (36.4 KB) TX bytes:25999 (25.9 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:19922 errors:0 dropped:0 overruns:0 frame:0
TX packets:19922 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:1996565 (1.9 MB) TX bytes:1996565 (1.9 MB)
Output of /etc/hosts on Master Node(kube4local):
vagrant@kube4local:~$ cat /etc/hosts
192.168.56.104 kube4local kube4local
192.168.56.105 kube5local kube5local
127.0.1.1 vagrant.vm vagrant
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Output of ifconfig on Slave Node(kube5local):
vagrant@kube5local:~$ ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:bb:37:ab:35
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
enp0s3 Link encap:Ethernet HWaddr 08:00:27:19:2c:a4
inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe19:2ca4/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:163514 errors:0 dropped:0 overruns:0 frame:0
TX packets:39792 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:207478954 (207.4 MB) TX bytes:2660902 (2.6 MB)
enp0s8 Link encap:Ethernet HWaddr 08:00:27:6a:f0:51
inet addr:192.168.56.105 Bcast:192.168.56.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe6a:f051/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:195 errors:0 dropped:0 overruns:0 frame:0
TX packets:151 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:30463 (30.4 KB) TX bytes:26737 (26.7 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Output of /etc/hosts on Slave Node(kube5local):
vagrant@kube5local:~$ cat /etc/hosts
192.168.56.104 kube4local kube4local
192.168.56.105 kube5local kube5local
127.0.1.1 vagrant.vm vagrant
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Nat, this is a bug in version v1.7.1. You can use version v1.7.0 or skip the pre-flight checks.
kubeadm join --skip-preflight-checks
You can refer to this thread for more details:
kubernetes v1.7.1 kubeadm join hostname "" could not be reached error
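For example (using the token from your kubeadm init output, assuming it is still valid), the join command with the pre-flight checks skipped would be:
sudo kubeadm join --token 3c68b6.8c3f8d5a0a29a3ac 10.0.2.15:6443 --skip-preflight-checks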
I am new to Kubernetes, so some of my questions may be basic.
NOTE: I removed http:// and https:// URL references in the commands and output below, since there is a limit to the number of URLs in a question.
My setup:
1 Physical host machine (running Ubuntu 16.04), with bridge networking enabled.
2 Ubuntu 16.04 VMs (Virtual Machines); VM1 is the Master Node, VM2 is the Slave Node.
I have a router, so behind the router both VMs get local IP addresses (i.e. not public IP addresses).
Since I am on a corporate network, I also have proxy settings.
Browser, apt, curl and wget all work fine. I am able to ping between VM1 and VM2.
Kubernetes Version: 1.7.0 on both Master Node(Virtual Machine-VM1) and Slave Node(Virtual Machine-VM2)
My Steps:
1. On both Master and Slave Nodes, installed the required Kubernetes packages (kubelet, kubeadm, kubectl, kubernetes-cni) and the Docker (docker.io) package.
On the Master Node:
1. When I ran kubeadm init, I was getting the following TCP timeout error:
sudo kubeadm init --apiserver-advertise-address=192.168.1.104 --pod-network-cidr=10.244.0.0/16 --skip-preflight-checks
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
unable to get URL "storage.googleapis.com/kubernetes-release/release/stable-1.7.txt": Get storage.googleapis.com/kubernetes-release/release/stable-1.7.txt: dial tcp 172.217.3.208:443: i/o timeout
So I tried specifying the Kubernetes version, since I read that this prevents the fetch from the external website, and with that kubeadm init was successful.
sudo kubeadm init --kubernetes-version v1.7.0 --apiserver-advertise-address=192.168.1.104 --pod-network-cidr=10.244.0.0/16 --skip-preflight-checks
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[certificates] Using the existing CA certificate and key.
[certificates] Using the existing API Server certificate and key.
[certificates] Using the existing API Server kubelet client certificate and key.
[certificates] Using the existing service account token signing key.
[certificates] Using the existing front-proxy CA certificate and key.
[certificates] Using the existing front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 14.009367 seconds
[token] Using token: ec4877.23c06ac2adf9d66c
[apiconfig] Created RBAC rules
[addons] Applied essential addon: kube-proxy
[addons] Applied essential addon: kube-dns
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token ec4877.23c06ac2adf9d66c 192.168.1.104:6443
Ran the below commands and they went through fine.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Tried to deploy a pod network to the cluster, but it fails with the same TCP timeout error:
kubectl apply -f docs.projectcalico.org/v2.3/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
Unable to connect to the server: dial tcp 151.101.0.133:80: i/o timeout
Downloaded the calico.yaml file using the browser and ran the command; it was successful.
skris14@skris14-ubuntu16:~/Downloads$ sudo kubectl apply -f ~/Downloads/calico.yaml
configmap "calico-config" created
daemonset "calico-etcd" created
service "calico-etcd" created
daemonset "calico-node" created
deployment "calico-policy-controller" created
clusterrolebinding "calico-cni-plugin" created
clusterrole "calico-cni-plugin" created
serviceaccount "calico-cni-plugin" created
clusterrolebinding "calico-policy-controller" created
clusterrole "calico-policy-controller" created
serviceaccount "calico-policy-controller" created
On the Slave Node:
Note: I am able to do a basic ping test, and ssh and scp commands between the master node running in VM1 and the slave node running in VM2 work fine.
Ran the join command, and it fails trying to get cluster info.
Output of the join command on the slave node:
skris14@sudha-ubuntu-16:~$ sudo kubeadm join --token ec4877.23c06ac2adf9d66c 192.168.1.104:6443
[sudo] password for skris14:
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.1.104:6443"
[discovery] Created cluster-info discovery client, requesting info from "192.168.1.104:6443"
[discovery] Failed to request cluster info, will try again: [Get 192.168.1.104:6443/: EOF]
^C
Output of Status Commands On the Master Node:
skris14@skris14-ubuntu16:~/Downloads$ kubectl get nodes
NAME STATUS AGE VERSION
skris14-ubuntu16.04-vm1 Ready 5d v1.7.0
skris14@skris14-ubuntu16:~/Downloads$ kubectl cluster-info
Kubernetes master is running at 192.168.1.104:6443
KubeDNS is running at 192.168.1.104:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
skris14@skris14-ubuntu16:~/Downloads$ kubectl get pods --namespace=kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE
calico-etcd-2lt0c 1/1 Running 0 14m 192.168.1.104 skris14-ubuntu16.04-vm1
calico-node-pp1p9 2/2 Running 0 14m 192.168.1.104 skris14-ubuntu16.04-vm1
calico-policy-controller-1727037546-m6wqt 1/1 Running 0 14m 192.168.1.104 skris14-ubuntu16.04-vm1
etcd-skris14-ubuntu16.04-vm1 1/1 Running 1 5d 192.168.1.104 skris14-ubuntu16.04-vm1
kube-apiserver-skris14-ubuntu16.04-vm1 1/1 Running 0 3m 192.168.1.104 skris14-ubuntu16.04-vm1
kube-controller-manager-skris14-ubuntu16.04-vm1 1/1 Running 0 4m 192.168.1.104 skris14-ubuntu16.04-vm1
kube-dns-2425271678-b05v8 0/3 Pending 0 4m
kube-dns-2425271678-ljsv1 0/3 OutOfcpu 0 5d skris14-ubuntu16.04-vm1
kube-proxy-40zrc 1/1 Running 1 5d 192.168.1.104 skris14-ubuntu16.04-vm1
kube-scheduler-skris14-ubuntu16.04-vm1 1/1 Running 5 5d 192.168.1.104 skris14-ubuntu16.04-vm1
Output of ifconfig on Master Node(Virtual Machine1):
skris14@skris14-ubuntu16:~/
docker0 Link encap:Ethernet HWaddr 02:42:7f:ee:8e:b7
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
ens3 Link encap:Ethernet HWaddr 52:54:be:36:42:a6
inet addr:192.168.1.104 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::c60c:647d:1d9d:aca1/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:184500 errors:0 dropped:35 overruns:0 frame:0
TX packets:92411 errors:0 dropped:0 overruns:0 carrier:0
collisions:458827 txqueuelen:1000
RX bytes:242793144 (242.7 MB) TX bytes:9162254 (9.1 MB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:848277 errors:0 dropped:0 overruns:0 frame:0
TX packets:848277 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:211936528 (211.9 MB) TX bytes:211936528 (211.9 MB)
tunl0 Link encap:IPIP Tunnel HWaddr
inet addr:192.168.112.192 Mask:255.255.255.255
UP RUNNING NOARP MTU:1440 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Output of ifconfig on Slave Node(Virtual Machine2):
skris14@sudha-ubuntu-16:~$ ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:69:5e:2d:22
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
ens3 Link encap:Ethernet HWaddr 52:54:be:36:42:b6
inet addr:192.168.1.105 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::cadb:b714:c679:955/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:72280 errors:0 dropped:0 overruns:0 frame:0
TX packets:36977 errors:0 dropped:0 overruns:0 carrier:0
collisions:183622 txqueuelen:1000
RX bytes:98350159 (98.3 MB) TX bytes:3431313 (3.4 MB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:1340 errors:0 dropped:0 overruns:0 frame:0
TX packets:1340 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:130985 (130.9 KB) TX bytes:130985 (130.9 KB)
[discovery] Failed to request cluster info, will try again: [Get 192.168.1.104:6443/: EOF]
Your error message shows the slave is not able to connect to the master API server. Check these items (rough commands are sketched below):
make sure the API server is running on port 6443
check the routes on both servers.
check the firewall rules on your hosts and router.
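A minimal sketch of those checks, assuming the master address 192.168.1.104 from your output:
# on the master: is anything listening on 6443?
sudo ss -tlnp | grep 6443
# on the slave: can the API server be reached at all? (-k skips cert verification;
# even an HTTP error response proves connectivity)
curl -k https://192.168.1.104:6443/version
# routes and firewall rules on each host
ip route
sudo iptables -L -n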
Most likely you get the timeout because the join token expired, is no longer valid, or does not exist on the master node. If that is the case, you will not be able to join the cluster. What you have to do is create a new token on the master node and use it in your kubeadm join command. More details in this solution.
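A rough sketch of that (the exact token subcommands vary by kubeadm version, so treat this as an assumption to verify against kubeadm token --help):
# on the master: list existing tokens and create a fresh one
kubeadm token list
kubeadm token create
# newer kubeadm releases can print the complete join command directly
kubeadm token create --print-join-command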
I am new to Kubernetes and I have been browsing, looking, and reading about why my external IP is not resolving.
I am running minikube on a ubuntu 16.04 distro.
In the services overview of the dashboard I have this:
my-nginx | run: my-nginx | 10.0.0.11 | my-nginx:80 TCP my-nginx:32431 | TCP 192.168.42.71:80
When I do an HTTP GET at http://192.168.42.165:32431/ I get the nginx page.
The configuration of the service is as follows
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2016-09-23T12:11:13Z
  labels:
    run: my-nginx
  name: my-nginx
  namespace: default
  resourceVersion: "4220"
  selfLink: /api/v1/namespaces/default/services/my-nginx
  uid: d24b617b-8186-11e6-a25b-9ed0bca2797a
spec:
  clusterIP: 10.0.0.11
  deprecatedPublicIPs:
  - 192.168.42.71
  externalIPs:
  - 192.168.42.71
  ports:
  - nodePort: 32431
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: my-nginx
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
These are parts of my ifconfig output:
virbr0 Link encap:Ethernet HWaddr fe:54:00:37:8f:41
inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4895 errors:0 dropped:0 overruns:0 frame:0
TX packets:8804 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:303527 (303.5 KB) TX bytes:12601315 (12.6 MB)
virbr1 Link encap:Ethernet HWaddr fe:54:00:9a:39:74
inet addr:192.168.42.1 Bcast:192.168.42.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:7462 errors:0 dropped:0 overruns:0 frame:0
TX packets:12176 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3357881 (3.3 MB) TX bytes:88555007 (88.5 MB)
vnet0 Link encap:Ethernet HWaddr fe:54:00:37:8f:41
inet6 addr: fe80::fc54:ff:fe37:8f41/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4895 errors:0 dropped:0 overruns:0 frame:0
TX packets:21173 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:372057 (372.0 KB) TX bytes:13248977 (13.2 MB)
vnet1 Link encap:Ethernet HWaddr fe:54:00:9a:39:74
inet addr:192.168.23.1 Bcast:0.0.0.0 Mask:255.255.255.255
inet6 addr: fe80::fc54:ff:fe9a:3974/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:7462 errors:0 dropped:0 overruns:0 frame:0
TX packets:81072 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3462349 (3.4 MB) TX bytes:92936270 (92.9 MB)
Does anyone have some pointers? Because I am lost.
Minikube doesn't support LoadBalancer services, so the service will never get an external IP.
But you can still access the service through its node port.
You can get the IP and port by running:
minikube service <service_name>
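With the service from your config that would be, for example:
minikube service my-nginx         # opens the service URL in a browser
minikube service my-nginx --url   # just prints the URL, e.g. http://<minikube-ip>:32431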
I assume you are running minikube in VirtualBox (there is no info on how you start it or what your host OS is).
When you create a service with type=LoadBalancer you should also run minikube tunnel to expose LoadBalancers from the cluster. Then when you run kubectl get svc you will get the external IP of the LoadBalancer. It is still minikube's IP, so if you want to expose it externally from your machine you should put some reverse proxy or tunnel on your machine.
If you're running Minikube on Windows, just run:
minikube tunnel
Note: It must be run in a separate terminal window to keep the tunnel open.
The above command will tunnel your container to localhost. Then you can get your service URL with:
kubectl get services [service name]
Replace [service name] with your service name. Don't forget to add the mapped port on the external IP endpoint.
Minikube External IP:
minikube doesn't allow you to access the external IPs directly for a
service of kind NodePort or LoadBalancer.
You don't get an external IP to access the service on the local
system, so the good option is to use the minikube IP.
Use the command below to get the service URL (based on the minikube IP) once your service is exposed.
minikube service service-name --url
Now use that URL to serve your purpose.
TL;DR minikube has "addons" which you can use to handle ingress and load balancing. Just enable and configure one of those.
https://medium.com/faun/metallb-configuration-in-minikube-to-enable-kubernetes-service-of-type-loadbalancer-9559739787df
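For example, the MetalLB addon described in that article can be switched on from the minikube CLI (a sketch; the IP range you hand it is your choice):
minikube addons enable metallb
minikube addons configure metallb   # prompts for the start/end of the address range MetalLB may assign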
Problem:
/ # ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
ping: sendto: Network unreachable
Example container ifconfig:
eth0 Link encap:Ethernet HWaddr F2:3D:87:30:39:B8
inet addr:10.2.8.64 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::f03d:87ff:fe30:39b8%32750/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:22 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:4088 (3.9 KiB) TX bytes:648 (648.0 B)
eth1 Link encap:Ethernet HWaddr 6E:1C:69:85:21:96
inet addr:172.16.28.63 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::6c1c:69ff:fe85:2196%32750/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:18 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1418 (1.3 KiB) TX bytes:648 (648.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1%32750/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Routing inside container:
/ # ip route show
10.2.0.0/16 via 10.2.8.1 dev eth0
10.2.8.0/24 dev eth0 src 10.2.8.73
172.16.28.0/24 via 172.16.28.1 dev eth1 src 172.16.28.72
172.16.28.1 dev eth1 src 172.16.28.72
Host iptables: http://pastebin.com/raw/UcLQQa4J
Host ifconfig: http://pastebin.com/raw/uxsM1bx6
logs by flannel:
main.go:275] Installing signal handlers
main.go:188] Using 104.238.xxx.xxx as external interface
main.go:189] Using 104.238.xxx.xxx as external endpoint
etcd.go:129] Found lease (10.2.8.0/24) for current IP (104.238.xxx.xxx), reusing
etcd.go:84] Subnet lease acquired: 10.2.8.0/24
ipmasq.go:50] Adding iptables rule: FLANNEL -d 10.2.0.0/16 -j ACCEPT
ipmasq.go:50] Adding iptables rule: FLANNEL ! -d 224.0.0.0/4 -j MASQUERADE
ipmasq.go:50] Adding iptables rule: POSTROUTING -s 10.2.0.0/16 -j FLANNEL
ipmasq.go:50] Adding iptables rule: POSTROUTING ! -s 10.2.0.0/16 -d 10.2.0.0/16 -j MASQUERADE
vxlan.go:153] Watching for L3 misses
vxlan.go:159] Watching for new subnet leases
vxlan.go:273] Handling initial subnet events
device.go:159] calling GetL2List() dev.link.Index: 3
vxlan.go:280] fdb already populated with: 104.238.xxx.xxx 82:83:be:17:3e:d6
vxlan.go:280] fdb already populated with: 104.238.xxx.xxx 82:dd:90:b2:42:87
vxlan.go:280] fdb already populated with: 104.238.xxx.xxx de:e8:be:28:cf:7a
systemd[1]: Started Network fabric for containers.
It is possible if you set a ConfigMap with upstreamNameservers.
Example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]
And in your Deployment definition add:
dnsPolicy: "ClusterFirst"
More info here:
https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers
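A quick sketch of applying it (the file name is arbitrary, and the k8s-app=kube-dns label is an assumption about the standard kube-dns deployment):
kubectl apply -f kube-dns-configmap.yaml
# restart the kube-dns pods so they pick up the new ConfigMap
kubectl -n kube-system delete pod -l k8s-app=kube-dns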
It is not possible to make it work because it is not yet implemented... I guess I am switching to Docker...
Edit: ...or not; I switched from flannel to calico, and it works OK.
rkt #862
k8s #2249
This GitHub issue on the Flannel project may provide a solution - essentially, try disabling IP masquerading (--ip-masq=false) on your Docker daemon, and enabling it (--ip-masq) on your Flannel daemon.
This solution worked for me when I was unable to ping internet IPs (e.g. 8.8.8.8) from inside a container in my Kubernetes cluster.
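In flag form that suggestion looks roughly like this (a sketch; adapt it to however your daemons are actually launched):
# Docker: stop masquerading container traffic itself
dockerd --ip-masq=false ...
# flannel: let flanneld install the MASQUERADE rules for traffic leaving the overlay
flanneld --ip-masq ...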
Check the kube-flannel.yml file and also the command used to create the cluster, i.e. kubeadm init --pod-network-cidr=10.244.0.0/16. By default kube-flannel.yml hands out the 10.244.0.0/16 network, so if you want to change the pod network CIDR, change it in that file as well.
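A quick way to see which network the manifest will hand out before applying it (assuming the "Network" key sits inside the manifest's net-conf.json entry, which is where flannel reads it from):
grep -A4 '"Network"' kube-flannel.yml
# the "Network" value must match the --pod-network-cidr passed to kubeadm init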
Basic question. I have memcached installed on my EC2 instance.
It has an elastic IP address of, say, 101.45.23.18, and ifconfig shows
eth0 Link encap:Ethernet HWaddr 12:31:3b:0e:3a:8f
inet addr:10.241.13.121 Bcast:10.241.13.255 Mask:255.255.255.0
inet6 addr: fe80::1031:3bff:fe0e:3a8f/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2654 errors:0 dropped:0 overruns:0 frame:0
TX packets:1717 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1806365 (1.8 MB) TX bytes:206184 (206.1 KB)
Interrupt:26
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
How do I start memcached to listen on the external IP address rather than the internal one?
Solved it. Super easy: you can't bind directly to the elastic IP address, but you can bind to 0.0.0.0, which solves the problem.
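For the record, a minimal sketch of that with stock memcached flags (-u names the unprivileged user to run as, so adjust it to your system; 11211 is the default port):
# listen on all interfaces so connections arriving via the elastic IP are accepted
memcached -d -u memcache -l 0.0.0.0 -p 11211
# since this accepts connections from anywhere, lock access down with the EC2 security group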