Exposing a bare-metal Kubernetes cluster to the internet

I am trying to set up my own single-node Kubernetes cluster on a bare-metal dedicated server. I am not that experienced in DevOps, but I need to deploy a service for my own project. I already set up the cluster with Juju and conjure-up Kubernetes over LXD, and the cluster is running fine.
$ juju status
Model Controller Cloud/Region Version SLA Timestamp
conjure-canonical-kubern-3b3 conjure-up-localhost-db9 localhost/localhost 2.4.3 unsupported 23:49:09Z
App Version Status Scale Charm Store Rev OS Notes
easyrsa 3.0.1 active 1 easyrsa jujucharms 195 ubuntu
etcd 3.2.10 active 3 etcd jujucharms 338 ubuntu
flannel 0.10.0 active 2 flannel jujucharms 351 ubuntu
kubeapi-load-balancer 1.14.0 active 1 kubeapi-load-balancer jujucharms 525 ubuntu exposed
kubernetes-master 1.13.1 active 1 kubernetes-master jujucharms 542 ubuntu
kubernetes-worker 1.13.1 active 1 kubernetes-worker jujucharms 398 ubuntu exposed
Unit Workload Agent Machine Public address Ports Message
easyrsa/0* active idle 0 10.213.117.66 Certificate Authority connected.
etcd/0* active idle 1 10.213.117.171 2379/tcp Healthy with 3 known peers
etcd/1 active idle 2 10.213.117.10 2379/tcp Healthy with 3 known peers
etcd/2 active idle 3 10.213.117.238 2379/tcp Healthy with 3 known peers
kubeapi-load-balancer/0* active idle 4 10.213.117.123 443/tcp Loadbalancer ready.
kubernetes-master/0* active idle 5 10.213.117.172 6443/tcp Kubernetes master running.
flannel/1* active idle 10.213.117.172 Flannel subnet 10.1.83.1/24
kubernetes-worker/0* active idle 7 10.213.117.136 80/tcp,443/tcp Kubernetes worker running.
flannel/4 active idle 10.213.117.136 Flannel subnet 10.1.27.1/24
Entity Meter status Message
model amber user verification pending
Machine State DNS Inst id Series AZ Message
0 started 10.213.117.66 juju-b03445-0 bionic Running
1 started 10.213.117.171 juju-b03445-1 bionic Running
2 started 10.213.117.10 juju-b03445-2 bionic Running
3 started 10.213.117.238 juju-b03445-3 bionic Running
4 started 10.213.117.123 juju-b03445-4 bionic Running
5 started 10.213.117.172 juju-b03445-5 bionic Running
7 started 10.213.117.136 juju-b03445-7 bionic Running
I also deployed a Hello World application that serves a greeting on port 8080 inside its pod, plus an nginx ingress that routes traffic to this service based on a specified host.
NAME READY STATUS RESTARTS AGE
pod/hello-world-696b6b59bd-fznwr 1/1 Running 1 176m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/example-service NodePort 10.152.183.53 <none> 8080:30450/TCP 176m
service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 10h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/hello-world 1/1 1 1 176m
NAME DESIRED CURRENT READY AGE
replicaset.apps/hello-world-696b6b59bd 1 1 1 176m
When I curl localhost I get connection refused as expected, which still looks fine since the service is not exposed outside the cluster. When I curl kubernetes-worker/0 at its address 10.213.117.136 on port 30450 (which I got from kubectl get all):
$ curl 10.213.117.136:30450
Hello Kubernetes!
Everything works like a charm (as expected). When I do
curl -H "Host: testhost.com" 10.213.117.136
Hello Kubernetes!
It works like a charm again! That means the ingress controller is correctly routing port 80 traffic to the right service based on the host rule. At this point I am 100% sure the cluster works as it should.
Now I am trying to access this service externally over the internet. When I load <server_ip>, obviously nothing loads, because the worker lives inside its own LXD subnet. So I thought I would forward port 80 from the server's eth0 to this IP, and added this rule to iptables:
sudo iptables -t nat -A PREROUTING -p tcp -j DNAT --to-destination 10.213.117.136
(For the sake of the example, let's route everything, not only port 80.) Now when I open http://<server_ip> on my computer, it loads!
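For production I would presumably at least limit this to the ports I actually need and to the public interface; roughly something like the following (assuming eth0 is the public interface, adjust to your environment):
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 10.213.117.136:80
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.213.117.136:443
sudo iptables -A FORWARD -d 10.213.117.136 -p tcp -m multiport --dports 80,443 -j ACCEPT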
So the real question is: how do I do that in production? Should I set up this forwarding rule in iptables? Is that a normal approach, or is it a hacky solution and there is something "standard" I am missing? The problem is that hardcoding a rule to a static worker node makes the cluster completely static: the IP will eventually change, and if I remove/add worker units it will stop working. I was thinking about writing a script which obtains this IP address from juju like this:
$ juju status kubernetes-worker/0 --format=json | jq '.machines["7"]."dns-name"'
"10.213.117.136"
and adds it to iptables, which is a bit better than a hardcoded IP, but it still feels hacky and there must be a better way.
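Something like this rough script is what I had in mind (it assumes the DNAT rule is the first rule in the nat PREROUTING chain and that eth0 is the public interface):
#!/bin/bash
# Look up the current worker address from juju and repoint the DNAT rule at it.
WORKER_IP=$(juju status kubernetes-worker/0 --format=json | jq -r '.machines["7"]."dns-name"')
sudo iptables -t nat -R PREROUTING 1 -i eth0 -p tcp --dport 80 -j DNAT --to-destination "${WORKER_IP}:80"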
As a last idea, I could run HAProxy outside of the cluster, directly on the machine, and just forward traffic to all available workers. That might also work, but I still don't know what the correct solution is and what is usually used in this case. Thank you!

So the real question is how to do that on production?
The normal way to do this in a production system is to use a Service.
The simplest case is when you just want your application to be accessible from outside on your node(s). In that case you can use a Service of type NodePort. This creates the iptables rules necessary to forward traffic from the host IP address to the pod(s) providing the service.
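A minimal sketch of such a Service for the hello-world Deployment from the question (the app: hello-world selector label is an assumption; it must match your Deployment's pod labels):
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: NodePort
  selector:
    app: hello-world          # assumption: must match your pod labels
  ports:
    - port: 8080              # service port inside the cluster
      targetPort: 8080        # container port
      nodePort: 30450         # optional; omit it and Kubernetes picks one from 30000-32767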
If you have a single node (which is not recommended in production!), you're ready at this point.
If you have multiple nodes in your Kubernetes cluster, Kubernetes configures all of them to provide access to the service (your clients can use any of them to reach it). However, you still have to solve the problem of how the clients learn which nodes are currently available to be contacted...
There are several ways to handle this:
use a protocol understood by the client to publish the currently available IP addresses (for example DNS),
use a floating (failover, virtual, HA) IP address managed by some software on your Kubernetes nodes (for example pacemaker/corosync), and direct the clients to this address,
use an external load balancer, configured separately, to forward traffic to some of the operational nodes (see the sketch after this list),
use an external load balancer, configured automatically by Kubernetes through a cloud-provider integration (a Service of type LoadBalancer), to forward traffic to some of the operational nodes.
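For a bare-metal setup like the one in the question, the third option is often HAProxy or nginx running on the host. A minimal HAProxy sketch, assuming the ingress controller listens on port 80 on each worker (the IP is the one from the question; add one server line per worker):
frontend http_in
    bind *:80
    mode http
    default_backend k8s_workers

backend k8s_workers
    mode http
    balance roundrobin
    server worker0 10.213.117.136:80 check
    # server worker1 <another-worker-ip>:80 check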

Related

Access Minikube Services from public IP

I know Minikube is meant for local use only, but I'd like to create a test environment for my applications.
To do that, I wish to expose the applications running inside the minikube cluster to external access (from any device on the public internet, like a 4G smartphone).
Note: I run minikube with --driver=docker
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
web8080 NodePort 10.99.39.162 <none> 8080:31613/TCP 3d1h
minikube ip
192.168.49.2
One way to do it is as follows:
firewall-cmd --add-port=8081/tcp
kubectl port-forward --address 0.0.0.0 services/web8080 8081:8080
Then I can access it using:
curl localhost:8081 (directly from the machine running the cluster inside a VM)
curl 192.168.x.xx:8081 (from my Mac in same network - this is the private ip of the machine running the cluster inside a VM)
curl 84.xxx.xxx.xxx:8081 (from a phone connected in 4G - this is the public ip exposed by my router)
I don't want to use this solution because kubectl port-forward is fragile and needs to be re-run every time the port forwarding stops.
How can I achieve this ?
(EDITED) - USING LOADBALANCER
When using a LoadBalancer-type service and minikube tunnel, I can reach the service only from inside the machine running the cluster.
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-service LoadBalancer 10.111.61.218 10.111.61.218 8080:31831/TCP 3d3h
curl 10.111.61.218:8080 (inside the machine running the cluster) is working
but curl 192.168.x.xx:8080 (from my Mac on same LAN) is not working
Thanks
Minikube, as a development tool for a single-node Kubernetes cluster, provides an inherent isolation layer between Kubernetes and external devices (specifically, for inbound traffic to your cluster from the LAN/WAN).
The different --driver options give you flexibility over where your Kubernetes cluster is spawned and how it behaves network-wise.
A side note (workaround)!
As your minikube already resides in a VM and uses --driver=docker, you could try --driver=none (you would then be able to curl VM_IP:NodePort from the LAN). It spawns the Kubernetes cluster directly on the VM.
Consider checking its documentation, as there are certain limitations/disadvantages:
Minikube.sigs.k8s.io: Docs: Drivers: None
As this setup is already based on a VM (with an unknown hypervisor) and the cluster is intended to be exposed outside of your LAN, I suggest going with a production-ready setup. This inherently eliminates the connectivity issues you are facing, because the Kubernetes cluster is provisioned directly on the VM and not inside a Docker container.
To explain the --driver=docker you are using: it spawns a container on the host system with Kubernetes inside it. Inside that container, Docker is used once again to spawn the Pods needed to run the Kubernetes cluster.
As for the tools to provision your Kubernetes cluster, you will need to choose the option that suits your needs best. Some of them are the following:
Kubeadm
Kubespray
MicroK8S
After you have created your Kubernetes cluster on the VM, you can forward traffic from your router directly to that VM.
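For example, a bare-bones kubeadm bootstrap could look roughly like this (a sketch only; the pod CIDR and <VM_IP> placeholder are assumptions, pick values matching your CNI plugin and VM):
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=<VM_IP>
# then install a CNI plugin (Flannel, Calico, ...), expose your workloads via a
# NodePort or Ingress, and forward the router port to <VM_IP>:<NodePort>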
Additional resources that you might find useful:
Stackoverflow.com: Questions: Expose Kubernetes cluster to the Internet (Virtualbox with minikube)
curl $(minikube ip):$NODE_PORT: now we can test that the app is exposed outside of the cluster using curl, the IP of the node and the externally exposed port.
For you: curl 192.168.49.2:31613
Use an nginx reverse proxy
https://www.zepworks.com/posts/access-minikube-remotely-kvm/
Install nginx, then add this to nginx.conf:
stream {
    server {
        listen 8081;
        proxy_pass 192.168.49.2:31613;   # minikube ip + the NodePort of the web8080 service
    }
}
Then restart nginx.
One way I use to get around the fact that the kubectl port-forward process stops after a while is to run it in a detached tmux session, following this. With that, I haven't had any problems with the exact same Minikube cluster configuration that you have.
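For example (the session name is arbitrary):
tmux new-session -d -s portforward \
  'kubectl port-forward --address 0.0.0.0 services/web8080 8081:8080'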

CoreDNS only works on one host in a Kubernetes cluster

I have a Kubernetes cluster with 3 nodes:
[root@ops001 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
azshara-k8s01 Ready <none> 143d v1.15.2
azshara-k8s02 Ready <none> 143d v1.15.2
azshara-k8s03 Ready <none> 143d v1.15.2
After I deployed some pods, I found that only one node, azshara-k8s03, could resolve DNS; the other two nodes could not. This is /etc/resolv.conf on the azshara-k8s03 host node:
options timeout:2 attempts:3 rotate single-request-reopen
; generated by /usr/sbin/dhclient-script
nameserver 100.100.2.136
nameserver 100.100.2.138
This is /etc/resolv.conf on the other two nodes:
nameserver 114.114.114.114
Should I keep them the same? What should I do to make DNS work on all 3 nodes?
Did you check whether 114.114.114.114 is actually reachable from your nodes? If not, change it to something that is ;-]
Also check which resolv.conf your kubelets actually use: it is often something other than /etc/resolv.conf. Run ps ax | grep kubelet, check the value of the --resolv-conf flag, and see if the DNS servers in that file work correctly.
Update:
What names are failing to resolve on the 2 problematic nodes? Are they public names or internal only? If they are internal only, then 114.114.114.114 will not know about them. 100.100.2.136 and 100.100.2.138 are not reachable for me: are they your internal DNS servers? If so, try simply changing /etc/resolv.conf on the 2 nodes that don't work to match the one that works.
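For reference, a quick way to check which resolv.conf the kubelet uses (the paths are assumptions and differ between setups):
ps ax | grep '[k]ubelet' | tr ' ' '\n' | grep -- '--resolv-conf'
# newer kubeadm-based clusters keep it in the kubelet config file instead:
grep resolvConf /var/lib/kubelet/config.yaml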
First step: make sure CoreDNS is listening on the port you specified. You can log in to a pod and use telnet to check that the exposed DNS port is reachable (here I am using Alpine with apk; on CentOS install telnet with yum, on Ubuntu or Debian with apt-get):
apk add busybox-extras
telnet <your coredns server ip> <your coredns listening port>
Second step: log in to pods on each host machine and make sure the port is reachable from every one of them. If the telnet connection fails, you should fix your cluster network first.
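A quick end-to-end check from a throwaway pod (this assumes the default kube-dns/CoreDNS service IP 10.96.0.10; verify yours with kubectl -n kube-system get svc kube-dns):
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- \
  nslookup kubernetes.default.svc.cluster.local 10.96.0.10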

unable to access dns from a kubernetes pod

I have a kubernetes master and node setup in two centos VMs on my Win 10.
I used flannel for CNI and deployed ambassador as an API gateway.
As the Ambassador routes did not work, I dug further and found that the DNS service (IP 10.96.0.10) is not reachable from a busybox pod, which means that none of the service names can be resolved. Could I get any suggestions, please?
1. You should use the newest version of Flannel.
Flannel does not set up service IPs; kube-proxy does. You should look at kube-proxy on your nodes and ensure it is not reporting errors (see the sketch after this list).
I'd suggest taking a look at https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#tabs-pod-install-4 and ensuring you have met the requirements stated there.
Similar issue but with Calico plugin you can find here: https://github.com/projectcalico/calico/issues/1798
2. Check that port 8285 is open: Flannel's UDP backend uses UDP port 8285 for sending encapsulated IP packets (the default VXLAN backend uses UDP 8472). Make sure this traffic can pass between the hosts.
3. Ambassador includes an integrated diagnostics service to help with troubleshooting; it may be useful for you. By default it is not exposed to the internet. To view it, we'll need to get the name of one of the Ambassador pods:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
ambassador-3655608000-43x86 1/1 Running 0 2m
ambassador-3655608000-w63zf 1/1 Running 0 2m
Forwarding local port 8877 to one of the pods:
kubectl port-forward ambassador-3655608000-43x86 8877
will then let us view the diagnostics at http://localhost:8877/ambassador/v0/diag/.
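Regarding point 1, a quick way to check kube-proxy for errors (the label assumes a kubeadm-style deployment; the pod names will differ in your cluster):
kubectl -n kube-system get pods -l k8s-app=kube-proxy
kubectl -n kube-system logs <one-of-the-kube-proxy-pods>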
The first point should solve your problem; if not, try the remaining ones.
I hope this helps.

How does DNS resolution work on Kubernetes with multiple networks?

I have a 4-node Kubernetes cluster: 1 controller and 3 workers. The following shows how they are configured, along with the versions.
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-ctrl-1 Ready master 1h v1.11.2 192.168.191.100 <none> Ubuntu 18.04.1 LTS 4.15.0-1021-aws docker://18.6.1
turtle-host-01 Ready <none> 1h v1.11.2 192.168.191.53 <none> Ubuntu 18.04.1 LTS 4.15.0-29-generic docker://18.6.1
turtle-host-02 Ready <none> 1h v1.11.2 192.168.191.2 <none> Ubuntu 18.04.1 LTS 4.15.0-34-generic docker://18.6.1
turtle-host-03 Ready <none> 1h v1.11.2 192.168.191.3 <none> Ubuntu 18.04.1 LTS 4.15.0-33-generic docker://18.6.1
Each of the nodes has two network interfaces, for argument's sake eth0 and eth1. eth1 is the network that I want the cluster to work on. I set up the controller using kubeadm init and passed --api-advertise-address 192.168.191.100. The worker nodes were then joined using this address.
Finally, on each node I modified the kubelet service to have --node-ip set, so that the layout looks as above.
The cluster appears to be working correctly and I can create pods, deployments, etc. However, the issue I have is that none of the pods are able to use the kube-dns service for DNS resolution.
This is not a problem with resolution itself; rather, the machines cannot connect to the DNS service to perform the resolution. For example, if I run a busybox container and exec into it to perform an nslookup, I get the following:
/ # nslookup www.google.co.uk
nslookup: read: Connection refused
nslookup: write to '10.96.0.10': Connection refused
I have a feeling that this is down to not using the default network, and because of that I suspect some iptables rules are not correct; that being said, these are just guesses.
I have tried both the Flannel overlay and now Weave net. The pod CIDR range is 10.32.0.0/16 and the service CIDR is as default.
I have noticed that with Kubernetes 1.11 there are now pods called coredns rather than one kube-dns.
I hope that this is a good place to ask this question. I am sure I am missing something small but vital so if anyone has any ideas that would be most welcome.
Update #1:
I should have said that the nodes are not all in the same place. I have a VPN running between them all, and this is the network I want things to communicate over. The idea was to try and have distributed nodes.
Update #2:
I saw another answer on SO (DNS in Kubernetes not working) that suggested kubelet needed to have --cluster-dns and --cluster-domain set. This is indeed the case on my DEV K8s cluster that I have running at home (on one network).
However, that is not the case on this cluster, and I suspect it is down to the later version. I did add the two settings to all nodes in the cluster, but it did not make things work.
Update #3
The topology of the cluster is as follows.
1 x Controller is in AWS
1 x Worker is in Azure
2 x Worker are physical machines in a colo Data Centre
All machines are connected to each other using ZeroTier VPN on the 192.168.191.0/24 network.
I have not configured any special routing. I agree that this is probably where the issue is, but I am not 100% sure what this routing should be.
Regarding kube-dns and nginx: I have not tainted my controller, so nginx is not on the master, nor is busybox. nginx and busybox are on workers 1 and 2 respectively.
I have used netcat to test connection to kube-dns and I get the following:
/ # nc -vv 10.96.0.10 53
nc: 10.96.0.10 (10.96.0.10:53): Connection refused
sent 0, rcvd 0
/ # nc -uvv 10.96.0.10 53
10.96.0.10 (10.96.0.10:53) open
The UDP connection does not complete.
I modified my setup so that I could run containers on the controller, so kube-dns, nginx and busybox are all on the controller, and I am able to connect and resolve DNS queries against 10.96.0.10.
So all this does point to routing or IPTables IMHO, I just need to work out what that should be.
Update #4
In response to comments I can confirm the following ping test results.
Master -> Azure Worker (Internet) : SUCCESS : Traceroute SUCCESS
Master -> Azure Worker (VPN) : SUCCESS : Traceroute SUCCESS
Azure Worker -> Master (Internet) : SUCCESS : Traceroute FAIL (too many hops)
Azure Worker -> Master (VPN) : SUCCESS : Traceroute SUCCESS
Master -> Colo Worker 1 (Internet) : SUCCESS : Traceroute SUCCESS
Master -> Colo Worker 1 (VPN) : SUCCESS : Traceroute SUCCESS
Colo Worker 1 -> Master (Internet) : SUCCESS : Traceroute FAIL (too many hops)
Colo Worker 1 -> Master (VPN) : SUCCESS : Traceroute SUCCESS
Update #5
After running the tests above, it got me thinking about routing and I wondered if it was as simple as providing a route to the controller over the VPN for the service CIDR range (10.96.0.0/12).
So on a host, not included in the cluster, I added a route thus:
route add -net 10.96.0.0/12 gw 192.168.191.100
And I could then resolve DNS using the kube-dns server address:
nslookup www.google.co.uk 10.96.0.10
So I then added a route, as above, to one of the worker nodes and tried the same. But it is blocked and I do not get a response.
Given that I can resolve DNS over the VPN with the appropriate route from a non-kubernetes machine, I can only think that there is an IPTables rule that needs updating or adding.
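One thing I still intend to compare between the controller and a worker is the NAT rules that kube-proxy programs for the DNS service; assuming kube-proxy runs in iptables mode, something like:
sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.10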
I think this is almost there, just one last bit to fix.
I realise this is wrong, as it is kube-proxy that should handle the DNS traffic on each host. I am leaving it here for information.
Following the instructions at this page, try running this:
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: dns-example
spec:
  containers:
    - name: test
      image: nginx
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
      - 1.2.3.4
    searches:
      - ns1.svc.cluster.local
      - my.dns.search.suffix
    options:
      - name: ndots
        value: "2"
      - name: edns0
and see if a manual DNS configuration works, or whether you have some networking/DNS problem.
It sounds like you are running on AWS. I suspect that your AWS security group is not allowing DNS traffic through. You can try allowing all traffic within the security group(s) that your master and nodes belong to, to see if that's the problem.
You can also check that all your masters and nodes allow routing:
cat /proc/sys/net/ipv4/ip_forward
If not, enable it:
echo 1 > /proc/sys/net/ipv4/ip_forward
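To keep this setting across reboots, you could also persist it via sysctl (the file name is arbitrary):
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf
sudo sysctl --system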
Hope it helps.