I am using a Windows laptop on which a Vagrant box is installed; inside the box I have a kubectl client that manages an external Kubernetes cluster.
For debugging purposes I would like to do a port-forward via kubectl and access that port from the host machine. This works perfectly from inside Vagrant to the Kubernetes cluster, but apparently something doesn't work in conjunction with the Vagrant port forwarding from host to Vagrant.
Here is my setup:
Port-Forwarding in Vagrant:
config.vm.network "forwarded_port", guest: 8080, host: 8080, auto_correct:false
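For context, a minimal sketch of where this line sits in the Vagrantfile (the box name is just a placeholder):
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"  # placeholder box name
  # forward guest port 8080 to host port 8080, without auto-correction
  config.vm.network "forwarded_port", guest: 8080, host: 8080, auto_correct: false
end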
Start an nginx container in Kubernetes:
kubectl run -i -t --image nginx test
Forward the port to localhost (inside Vagrant):
kubectl port-forward test-64585bfbd4-zxpsd 8080:80
Test nginx running inside the Vagrant box:
vagrant@csbox:~$ curl http://localhost:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p>
<p>For online documentation and support please refer to
nginx.org.<br/>
Commercial support is available at
nginx.com.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Works.
Now going a level up, on the Windows host:
PS U:\> Invoke-WebRequest http://localhost:8080
Invoke-WebRequest : The underlying connection was closed: An unexpected error occurred on a receive.
At line:1 char:1
+ Invoke-WebRequest http://localhost:8080
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-WebRequest], WebException
+ FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeWebRequestCommand
Does not work.
From my understanding, just looking at the port forwardings, everything should be okay. Do you have any ideas why this doesn't work as expected?
By default, kubectl port-forward binds to the address 127.0.0.1. That's why you are not able to access it from outside the Vagrant VM. The solution is to make kubectl port-forward bind to 0.0.0.0 using the argument --address 0.0.0.0.
Running the command:
kubectl port-forward test-64585bfbd4-zxpsd --address 0.0.0.0 8080:80
will solve your issue.
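Assuming the Vagrant forwarded port stays as configured above, the request from the Windows host should then go through, for example:
PS U:\> Invoke-WebRequest http://localhost:8080 -UseBasicParsing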
Older versions of kubectl port-forward bind to 127.0.0.1 and don't let you define a bind address. The traffic from your Windows host machine hits the main network interface of your Vagrant VM, and therefore this doesn't work. You can fix the issue by routing traffic from the Vagrant VM's main network interface to the loopback interface using iptables:
Forward traffic arriving on your Vagrant VM's main network interface to 127.0.0.1 (replace $PORT with the port you're forwarding):
$ iptables -t nat -I PREROUTING -p tcp --dport $PORT -j DNAT --to-destination 127.0.0.1:$PORT
Look up the name of your Vagrant VM's main network interface:
$ ifconfig
enp0s3 Link encap:Ethernet HWaddr 02:38:b8:f5:60:7e
inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0
inet6 addr: fe80::38:b8ff:fef5:607e/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1106 errors:0 dropped:0 overruns:0 frame:0
TX packets:736 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:423190 (423.1 KB) TX bytes:80704 (80.7 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
As forwarding traffic to the loopback interface is disabled by default, enable it for the main interface (replace $MAIN_NETWORK_INTERFACE_NAME with the interface name, in the example above enp0s3):
sysctl -w net.ipv4.conf.$MAIN_NETWORK_INTERFACE_NAME.route_localnet=1
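Putting the pieces together, a minimal sketch of the whole sequence run inside the Vagrant VM, using the interface and port from this example (adjust both to your setup):
# allow routing of externally received packets to the loopback interface
sudo sysctl -w net.ipv4.conf.enp0s3.route_localnet=1
# redirect traffic arriving on port 8080 to 127.0.0.1:8080
sudo iptables -t nat -I PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 127.0.0.1:8080
# start the port-forward, which binds to 127.0.0.1
kubectl port-forward test-64585bfbd4-zxpsd 8080:80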
I am new to Kubernetes, so some of my questions may be basic.
My setup: 2 VMs (running Ubuntu 16.04.2)
Kubernetes version: 1.7.1 on both the Master Node (kube4local) and the Slave Node (kube5local)
My steps:
1. On both Master and Slave Nodes, installed the required Kubernetes packages (kubelet, kubeadm, kubectl, kubernetes-cni) and the Docker (docker.io) package.
On the Master Node:
1. Ran kubeadm init:
vagrant@kube4local:~$ sudo kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.1
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.06.0-ce. Max validated version: 1.12
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [kube4local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.2.15]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 1051.552012 seconds
[token] Using token: 3c68b6.8c3f8d5a0a29a3ac
[apiconfig] Created RBAC rules
[addons] Applied essential addon: kube-proxy
[addons] Applied essential addon: kube-dns
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token 3c68b6.8c3f8d5a0a29a3ac 10.0.2.15:6443
vagrant@kube4local:~$ mkdir -p $HOME/.kube
vagrant@kube4local:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
vagrant@kube4local:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
vagrant@kube4local:~$ sudo kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
serviceaccount "weave-net" created
clusterrole "weave-net" created
clusterrolebinding "weave-net" created
daemonset "weave-net" created
On the Slave Node:
Note: I am able to do a basic ping test, and ssh and scp commands between the master node running in VM1 and the slave node running in VM2 work fine.
Ran the join command.
Output of join command in slave node:
vagrant@kube5local:~$ sudo kubeadm join --token 3c68b6.8c3f8d5a0a29a3ac 10.0.2.15:6443
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.06.0-ce. Max validated version: 1.12
[preflight] WARNING: hostname "" could not be reached
[preflight] WARNING: hostname "" lookup : no such host
[preflight] Some fatal errors occurred:
hostname "" a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`
Why do I get this error? My /etc/hosts is correct:
[preflight] WARNING: hostname "" could not be reached
[preflight] WARNING: hostname "" lookup : no such host
Output of Status Commands On the Master Node:
vagrant@kube4local:~$ sudo kubectl cluster-info
Kubernetes master is running at https://10.0.2.15:6443
vagrant@kube4local:~$ sudo kubectl get nodes
NAME STATUS AGE VERSION
kube4local Ready 26m v1.7.1
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Output of ifconfig on Master Node(kube4local):
vagrant@kube4local:~$ ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:3a:c4:00:50
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
enp0s3 Link encap:Ethernet HWaddr 08:00:27:19:2c:a4
inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe19:2ca4/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:260314 errors:0 dropped:0 overruns:0 frame:0
TX packets:58921 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:334293914 (334.2 MB) TX bytes:3918136 (3.9 MB)
enp0s8 Link encap:Ethernet HWaddr 08:00:27:b8:ef:b6
inet addr:192.168.56.104 Bcast:192.168.56.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:feb8:efb6/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:247 errors:0 dropped:0 overruns:0 frame:0
TX packets:154 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:36412 (36.4 KB) TX bytes:25999 (25.9 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:19922 errors:0 dropped:0 overruns:0 frame:0
TX packets:19922 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:1996565 (1.9 MB) TX bytes:1996565 (1.9 MB)
Output of /etc/hosts on Master Node(kube4local):
vagrant@kube4local:~$ cat /etc/hosts
192.168.56.104 kube4local kube4local
192.168.56.105 kube5local kube5local
127.0.1.1 vagrant.vm vagrant
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Output of ifconfig on Slave Node(kube5local):
vagrant@kube5local:~$ ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:bb:37:ab:35
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
enp0s3 Link encap:Ethernet HWaddr 08:00:27:19:2c:a4
inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe19:2ca4/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:163514 errors:0 dropped:0 overruns:0 frame:0
TX packets:39792 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:207478954 (207.4 MB) TX bytes:2660902 (2.6 MB)
enp0s8 Link encap:Ethernet HWaddr 08:00:27:6a:f0:51
inet addr:192.168.56.105 Bcast:192.168.56.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe6a:f051/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:195 errors:0 dropped:0 overruns:0 frame:0
TX packets:151 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:30463 (30.4 KB) TX bytes:26737 (26.7 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Output of /etc/hosts on Slave Node(kube5local):
vagrant@kube5local:~$ cat /etc/hosts
192.168.56.104 kube4local kube4local
192.168.56.105 kube5local kube5local
127.0.1.1 vagrant.vm vagrant
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
This is a bug in version v1.7.1. You can use version v1.7.0 or skip the pre-flight checks:
kubeadm join --skip-preflight-checks
You can refer to this thread for more details:
kubernetes v1.7.1 kubeadm join hostname "" could not be reached error
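For example, with the token and master address from the question above, the join command would look something like:
sudo kubeadm join --token 3c68b6.8c3f8d5a0a29a3ac 10.0.2.15:6443 --skip-preflight-checks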
I am new to Kubernetes, so some of my questions may be basic.
NOTE: I removed http:// and https:// URL prefixes in the commands and output below, since there is a limit to the number of URLs in a question.
My setup:
1 physical host machine (running Ubuntu 16.04), with bridged networking enabled.
2 Ubuntu 16.04 VMs (virtual machines); VM1 is the Master Node, VM2 is the Slave Node.
I have a router, so behind the router both VMs get local IP addresses (i.e., not public IP addresses).
Since I am on a corporate network, I also have proxy settings.
Browser, apt, curl and wget work fine. Able to ping between VM1 and VM2.
Kubernetes version: 1.7.0 on both the Master Node (VM1) and the Slave Node (VM2).
My Steps:
1. On both Master and Slave Nodes, installed the required Kubernetes packages (kubelet, kubeadm, kubectl, kubernetes-cni) and the Docker (docker.io) package.
On the Master Node:
1. On the Master Node, when I ran kubeadm init, I was getting the following TCP timeout error:
sudo kubeadm init --apiserver-advertise-address=192.168.1.104 --pod-network-cidr=10.244.0.0/16 --skip-preflight-checks
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
unable to get URL "storage.googleapis.com/kubernetes-release/release/stable-1.7.txt": Get storage.googleapis.com/kubernetes-release/release/stable-1.7.txt: dial tcp 172.217.3.208:443: i/o timeout
So I tried specifying the Kubernetes version, since I read that this prevents the fetch from the external website, and with that kubeadm init was successful.
sudo kubeadm init --kubernetes-version v1.7.0 --apiserver-advertise-address=192.168.1.104 --pod-network-cidr=10.244.0.0/16 --skip-preflight-checks
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[certificates] Using the existing CA certificate and key.
[certificates] Using the existing API Server certificate and key.
[certificates] Using the existing API Server kubelet client certificate and key.
[certificates] Using the existing service account token signing key.
[certificates] Using the existing front-proxy CA certificate and key.
[certificates] Using the existing front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 14.009367 seconds
[token] Using token: ec4877.23c06ac2adf9d66c
[apiconfig] Created RBAC rules
[addons] Applied essential addon: kube-proxy
[addons] Applied essential addon: kube-dns
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token ec4877.23c06ac2adf9d66c 192.168.1.104:6443
Ran the below commands and they went through fine.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Tried to deploy a pod network to the cluster, but it fails with the same TCP timeout error:
kubectl apply -f docs.projectcalico.org/v2.3/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
Unable to connect to the server: dial tcp 151.101.0.133:80: i/o timeout
Downloaded the calico.yaml file using the browser and ran the command; it was successful.
skris14@skris14-ubuntu16:~/Downloads$ sudo kubectl apply -f ~/Downloads/calico.yaml
configmap "calico-config" created
daemonset "calico-etcd" created
service "calico-etcd" created
daemonset "calico-node" created
deployment "calico-policy-controller" created
clusterrolebinding "calico-cni-plugin" created
clusterrole "calico-cni-plugin" created
serviceaccount "calico-cni-plugin" created
clusterrolebinding "calico-policy-controller" created
clusterrole "calico-policy-controller" created
serviceaccount "calico-policy-controller" created
On the Slave Node:
Note: I am able to do a basic ping test, and ssh and scp commands between the master node running in VM1 and the slave node running in VM2 work fine.
Ran the join command, and it fails trying to get cluster info.
Output of join command in slave node:
skris14@sudha-ubuntu-16:~$ sudo kubeadm join --token ec4877.23c06ac2adf9d66c 192.168.1.104:6443
[sudo] password for skris14:
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.1.104:6443"
[discovery] Created cluster-info discovery client, requesting info from "192.168.1.104:6443"
[discovery] Failed to request cluster info, will try again: [Get 192.168.1.104:6443/: EOF]
^C
Output of Status Commands On the Master Node:
skris14@skris14-ubuntu16:~/Downloads$ kubectl get nodes
NAME STATUS AGE VERSION
skris14-ubuntu16.04-vm1 Ready 5d v1.7.0
skris14@skris14-ubuntu16:~/Downloads$ kubectl cluster-info
Kubernetes master is running at 192.168.1.104:6443
KubeDNS is running at 192.168.1.104:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
skris14@skris14-ubuntu16:~/Downloads$ kubectl get pods --namespace=kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE
calico-etcd-2lt0c 1/1 Running 0 14m 192.168.1.104 skris14-ubuntu16.04-vm1
calico-node-pp1p9 2/2 Running 0 14m 192.168.1.104 skris14-ubuntu16.04-vm1
calico-policy-controller-1727037546-m6wqt 1/1 Running 0 14m 192.168.1.104 skris14-ubuntu16.04-vm1
etcd-skris14-ubuntu16.04-vm1 1/1 Running 1 5d 192.168.1.104 skris14-ubuntu16.04-vm1
kube-apiserver-skris14-ubuntu16.04-vm1 1/1 Running 0 3m 192.168.1.104 skris14-ubuntu16.04-vm1
kube-controller-manager-skris14-ubuntu16.04-vm1 1/1 Running 0 4m 192.168.1.104 skris14-ubuntu16.04-vm1
kube-dns-2425271678-b05v8 0/3 Pending 0 4m
kube-dns-2425271678-ljsv1 0/3 OutOfcpu 0 5d skris14-ubuntu16.04-vm1
kube-proxy-40zrc 1/1 Running 1 5d 192.168.1.104 skris14-ubuntu16.04-vm1
kube-scheduler-skris14-ubuntu16.04-vm1 1/1 Running 5 5d 192.168.1.104 skris14-ubuntu16.04-vm1
Output of ifconfig on Master Node(Virtual Machine1):
skris14@skris14-ubuntu16:~/
docker0 Link encap:Ethernet HWaddr 02:42:7f:ee:8e:b7
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
ens3 Link encap:Ethernet HWaddr 52:54:be:36:42:a6
inet addr:192.168.1.104 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::c60c:647d:1d9d:aca1/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:184500 errors:0 dropped:35 overruns:0 frame:0
TX packets:92411 errors:0 dropped:0 overruns:0 carrier:0
collisions:458827 txqueuelen:1000
RX bytes:242793144 (242.7 MB) TX bytes:9162254 (9.1 MB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:848277 errors:0 dropped:0 overruns:0 frame:0
TX packets:848277 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:211936528 (211.9 MB) TX bytes:211936528 (211.9 MB)
tunl0 Link encap:IPIP Tunnel HWaddr
inet addr:192.168.112.192 Mask:255.255.255.255
UP RUNNING NOARP MTU:1440 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Output of ifconfig on Slave Node(Virtual Machine2):
skris14@sudha-ubuntu-16:~$ ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:69:5e:2d:22
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
ens3 Link encap:Ethernet HWaddr 52:54:be:36:42:b6
inet addr:192.168.1.105 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::cadb:b714:c679:955/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:72280 errors:0 dropped:0 overruns:0 frame:0
TX packets:36977 errors:0 dropped:0 overruns:0 carrier:0
collisions:183622 txqueuelen:1000
RX bytes:98350159 (98.3 MB) TX bytes:3431313 (3.4 MB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:1340 errors:0 dropped:0 overruns:0 frame:0
TX packets:1340 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:130985 (130.9 KB) TX bytes:130985 (130.9 KB)
[discovery] Failed to request cluster info, will try again: [Get 192.168.1.104:6443/: EOF]
Your error message shows that the slave is not able to connect to the master API server. Check these items (a few example commands are sketched after the list):
Make sure the API server is running on port 6443.
Check the routes on both servers.
Check the firewall rules on your hosts and your router.
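For example (generic diagnostics; adjust the address to your master's IP):
# on the master: confirm the API server is listening on 6443
sudo netstat -tlnp | grep 6443
# from the slave: test raw TCP connectivity to the master
nc -vz 192.168.1.104 6443
# on both machines: inspect routes and firewall rules
ip route
sudo iptables -L -n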
Most likely you get a timeout because the join token expired, is no longer valid, or does not exist on the master node. If that is the case, you will not be able to join the cluster. What you have to do is create a new token on the master node and use it in your kubeadm join command. More details in this solution.
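A rough sketch of that workflow (exact kubeadm token flags vary between versions):
# on the master: list existing tokens and create a fresh one
kubeadm token list
kubeadm token create
# on the node: join again with the new token
sudo kubeadm join --token <new-token> 192.168.1.104:6443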
I am new to Kubernetes and I have been browsing and reading to find out why my external IP is not resolving.
I am running minikube on an Ubuntu 16.04 distro.
In the services overview of the dashboard I have this:
my-nginx | run: my-nginx | 10.0.0.11 | my-nginx:80 TCP my-nginx:32431 | TCP 192.168.42.71:80
When I do an HTTP GET at http://192.168.42.165:32431/ I get the nginx page.
The configuration of the service is as follows:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2016-09-23T12:11:13Z
  labels:
    run: my-nginx
  name: my-nginx
  namespace: default
  resourceVersion: "4220"
  selfLink: /api/v1/namespaces/default/services/my-nginx
  uid: d24b617b-8186-11e6-a25b-9ed0bca2797a
spec:
  clusterIP: 10.0.0.11
  deprecatedPublicIPs:
  - 192.168.42.71
  externalIPs:
  - 192.168.42.71
  ports:
  - nodePort: 32431
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: my-nginx
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
These are parts of my ifconfig:
virbr0 Link encap:Ethernet HWaddr fe:54:00:37:8f:41
inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4895 errors:0 dropped:0 overruns:0 frame:0
TX packets:8804 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:303527 (303.5 KB) TX bytes:12601315 (12.6 MB)
virbr1 Link encap:Ethernet HWaddr fe:54:00:9a:39:74
inet addr:192.168.42.1 Bcast:192.168.42.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:7462 errors:0 dropped:0 overruns:0 frame:0
TX packets:12176 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3357881 (3.3 MB) TX bytes:88555007 (88.5 MB)
vnet0 Link encap:Ethernet HWaddr fe:54:00:37:8f:41
inet6 addr: fe80::fc54:ff:fe37:8f41/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4895 errors:0 dropped:0 overruns:0 frame:0
TX packets:21173 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:372057 (372.0 KB) TX bytes:13248977 (13.2 MB)
vnet1 Link encap:Ethernet HWaddr fe:54:00:9a:39:74
inet addr:192.168.23.1 Bcast:0.0.0.0 Mask:255.255.255.255
inet6 addr: fe80::fc54:ff:fe9a:3974/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:7462 errors:0 dropped:0 overruns:0 frame:0
TX packets:81072 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3462349 (3.4 MB) TX bytes:92936270 (92.9 MB)
Does anyone have some pointers, because I am lost?
Minikube doesn't support LoadBalancer services, so the service will never get an external IP.
But you can access the service anyway with its external port.
You can get the IP and PORT by running:
minikube service <service_name>
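For example, with the service from the question (--url prints the URL instead of opening a browser):
minikube service my-nginx --url
This should print something like http://<minikube-ip>:32431, which you can then curl or open.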
I assume you are using minikube in VirtualBox (there was no info on how you start it or what your host OS is).
When you create a service with type=LoadBalancer you should also run minikube tunnel to expose LoadBalancers from the cluster. Then, when you run kubectl get svc, you will get the external IP of the LoadBalancer. It is still minikube's IP, so if you want to expose it externally from your machine you should put some reverse proxy or tunnel on your machine.
If you're running Minikube on Windows, just run:
minikube tunnel
Note: it must be run in a separate terminal window to keep the tunnel open.
The above command tunnels your container to localhost. Then you can get your service URL with:
kubectl get services [service name]
Replace [service name] with your service name. Don't forget to add the mapped port to the external IP endpoint.
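A quick sketch with the service name from the question:
# terminal 1: keep the tunnel open
minikube tunnel
# terminal 2: the EXTERNAL-IP column should now be populated
kubectl get services my-nginx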
Minikube external IP:
minikube doesn't allow you to access the external IPs directly for a service of kind NodePort or LoadBalancer.
You don't get an external IP to access the service on the local system, so the good option is to use the minikube IP.
Use the command below to get the URL (minikube IP plus node port) once your service is exposed:
minikube service service-name --url
Now use that URL to serve your purpose.
TL;DR minikube has "addons" which you can use to handle ingress and load balancing. Just enable and configure one of those.
https://medium.com/faun/metallb-configuration-in-minikube-to-enable-kubernetes-service-of-type-loadbalancer-9559739787df
Problem:
/ # ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
ping: sendto: Network unreachable
Example container ifconfig:
eth0 Link encap:Ethernet HWaddr F2:3D:87:30:39:B8
inet addr:10.2.8.64 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::f03d:87ff:fe30:39b8%32750/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:22 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:4088 (3.9 KiB) TX bytes:648 (648.0 B)
eth1 Link encap:Ethernet HWaddr 6E:1C:69:85:21:96
inet addr:172.16.28.63 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::6c1c:69ff:fe85:2196%32750/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:18 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1418 (1.3 KiB) TX bytes:648 (648.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1%32750/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Routing inside container:
/ # ip route show
10.2.0.0/16 via 10.2.8.1 dev eth0
10.2.8.0/24 dev eth0 src 10.2.8.73
172.16.28.0/24 via 172.16.28.1 dev eth1 src 172.16.28.72
172.16.28.1 dev eth1 src 172.16.28.72
Host iptables: http://pastebin.com/raw/UcLQQa4J
Host ifconfig: http://pastebin.com/raw/uxsM1bx6
Logs from flannel:
main.go:275] Installing signal handlers
main.go:188] Using 104.238.xxx.xxx as external interface
main.go:189] Using 104.238.xxx.xxx as external endpoint
etcd.go:129] Found lease (10.2.8.0/24) for current IP (104.238.xxx.xxx), reusing
etcd.go:84] Subnet lease acquired: 10.2.8.0/24
ipmasq.go:50] Adding iptables rule: FLANNEL -d 10.2.0.0/16 -j ACCEPT
ipmasq.go:50] Adding iptables rule: FLANNEL ! -d 224.0.0.0/4 -j MASQUERADE
ipmasq.go:50] Adding iptables rule: POSTROUTING -s 10.2.0.0/16 -j FLANNEL
ipmasq.go:50] Adding iptables rule: POSTROUTING ! -s 10.2.0.0/16 -d 10.2.0.0/16 -j MASQUERADE
vxlan.go:153] Watching for L3 misses
vxlan.go:159] Watching for new subnet leases
vxlan.go:273] Handling initial subnet events
device.go:159] calling GetL2List() dev.link.Index: 3
vxlan.go:280] fdb already populated with: 104.238.xxx.xxx 82:83:be:17:3e:d6
vxlan.go:280] fdb already populated with: 104.238.xxx.xxx 82:dd:90:b2:42:87
vxlan.go:280] fdb already populated with: 104.238.xxx.xxx de:e8:be:28:cf:7a
systemd[1]: Started Network fabric for containers.
It is possible if you set a ConfigMap with upstreamNameservers.
Example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["8.8.8.8", "8.8.8.4"]
And in your Deployment definition add:
dnsPolicy: "ClusterFirst"
More info here:
https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers
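As an illustration, a minimal (hypothetical) fragment of a Deployment showing where dnsPolicy sits, under the pod template's spec:
spec:
  template:
    spec:
      dnsPolicy: "ClusterFirst"  # resolve through the cluster DNS (kube-dns) first
      containers:
      - name: example  # hypothetical container name
        image: nginx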
It is not possible to make it work because it is not yet implemented... I guess I am switching to Docker...
edit: ...or not; I switched from flannel to calico, and it works OK.
rkt #862
k8s #2249
This GitHub issue on the Flannel project may provide a solution - essentially, try disabling IP masquerading (--ip-masq=false) on your Docker daemon, and enabling it (--ip-masq) on your Flannel daemon.
This solution worked for me when I was unable to ping internet IPs (e.g. 8.8.8.8) from inside a container in my Kubernetes cluster.
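A rough sketch of what that change might look like (the daemon.json path is an assumption and depends on how Docker and flannel are installed):
# /etc/docker/daemon.json - turn off Docker's own masquerading
{ "ip-masq": false }
# flanneld - start it with masquerading enabled (keep your existing flags)
flanneld --ip-masq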
Check the kube-flannel.yml file and also the command used to create the cluster, kubeadm init --pod-network-cidr=10.244.0.0/16. By default the kube-flannel.yml file uses the 10.244.0.0/16 network, so if you want to change the pod network CIDR, change it in that file as well.
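For reference, the pod network is defined in the net-conf.json section of kube-flannel.yml; it typically looks like the snippet below and must match the --pod-network-cidr passed to kubeadm init:
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }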
I recently installed a CentOS virtual machine (VMware Player) on my Windows 7 host.
I can ping my VM from the internal network without any problem.
I can also reach the internal network from my VM without issues.
But my VM can't access the internet; I can't ping Google, for example, or any other external network.
I tried several solutions and spent more than a week trying to figure out the issue.
Configuration:
My VM is bridged and working in DHCP mode:
[root@localhost ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:0C:29:2F:D7:52
inet addr:172.31.44.128 Bcast:172.31.47.255 Mask:255.255.248.0
inet6 addr: fe80::20c:29ff:fe2f:d752/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:15535 errors:0 dropped:0 overruns:0 frame:0
TX packets:503 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1099726 (1.0 MiB) TX bytes:38953 (38.0 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:36 errors:0 dropped:0 overruns:0 frame:0
TX packets:36 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:4098 (4.0 KiB) TX bytes:4098 (4.0 KiB)
[root@localhost ~]# more /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=localhost.localdomain
[root@localhost ~]# more /etc/sysconfig/network-scripts/ifcfg-eth0
# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
DEVICE=eth0
BOOTPROTO=dhcp
DHCPCLASS=
HWADDR=00:0C:29:2F:D7:52
ONBOOT=yes
[root@localhost ~]# more /etc/resolv.conf
; generated by /sbin/dhclient-script
search dhcp.city.country.company
nameserver 172.31.41.2
nameserver 172.17.25.22
nameserver 172.16.25.10
[root@localhost ~]# netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
172.31.40.0 0.0.0.0 255.255.248.0 U 0 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
0.0.0.0 172.31.40.1 0.0.0.0 UG 0 0 0 eth0
I can ping my gateway, and I can also ping my DNS servers and proxy:
[root@localhost ~]# ping 172.31.40.1
PING 172.31.40.1 (172.31.40.1) 56(84) bytes of data.
64 bytes from 172.31.40.1: icmp_seq=1 ttl=255 time=11.9 ms
64 bytes from 172.31.40.1: icmp_seq=2 ttl=255 time=1.18 ms
[root@localhost ~]# ping 172.31.41.2
PING 172.31.41.2 (172.31.41.2) 56(84) bytes of data.
64 bytes from 172.31.41.2: icmp_seq=1 ttl=128 time=1.75 ms
64 bytes from 172.31.41.2: icmp_seq=2 ttl=128 time=0.520 ms
64 bytes from 172.31.41.2: icmp_seq=3 ttl=128 time=0.580 ms
[root@localhost ~]# ping ptx.proxy.corp.company
PING lmarcproxy100.ptx.fr.company (10.7.80.40) 56(84) bytes of data.
64 bytes from lmarcproxy100.ptx.fr.company (10.7.80.40): icmp_seq=1 ttl=246 time=40.2 ms
64 bytes from lmarcproxy100.ptx.fr.company (10.7.80.40): icmp_seq=2 ttl=246 time=40.1 ms
64 bytes from lmarcproxy100.ptx.fr.company (10.7.80.40): icmp_seq=3 ttl=246 time=40.2 ms
64 bytes from lmarcproxy100.ptx.fr.company (10.7.80.40): icmp_seq=4 ttl=246 time=40.2 ms
Network interface is up & running:
[root@localhost ~]# service network status
Configured devices:
lo eth0
Currently active devices:
lo eth0
Firewalls are stopped:
[root@localhost ~]# service iptables status
Firewall is stopped.
[root@localhost ~]# service ip6tables status
Firewall is stopped.
What else? I can also use yum!
But I can't connect to the internet!
Thanks in advance for your help.
Try to ping your nameserver IP addresses and your gateway address. Disable the search... line in your resolv.conf.
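To narrow it down further, it can help to separate raw IP routing from DNS resolution (a generic check, not specific to this setup):
# if this fails, the problem is routing/NAT rather than DNS
ping -c 3 8.8.8.8
# if only this fails, the problem is name resolution (resolv.conf)
ping -c 3 google.com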