Change kubernetes --service-cluster-ip-range after the cluster is initialized

After initializing a Kubernetes multi-master cluster I realized that my --service-cluster-ip-range overlaps with the actual host subnet. A lot of service IPs overlap with the IPs of the actual kube node hosts. Because of that, I now see a lot of issues in the kubedns pods like the one below:
getsockopt: no route to host
My LAN is: 10.100.0.0/24
Kube service subnet is: 10.96.0.0/12
Now I want to change this in the kube-apiserver pod YAMLs after removing all the services, but it won't let me, saying that this specific section is not subject to change. Is there a way to fix this?
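For reference, on a kubeadm-style control plane the flag in question lives in the API server's static pod manifest; a minimal sketch of the relevant part (the path and surrounding fields assume a kubeadm layout):

# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --service-cluster-ip-range=10.96.0.0/12   # the range that overlaps the 10.100.0.0/24 LAN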

Related

How to set DNS entries & network configuration for a Kubernetes cluster at home (noob here)

I am currently running a Kubernetes cluster on my own home server (in Proxmox CTs; it was kind of difficult to get working because I am using ZFS too, but it runs now), and the setup is as follows:
lb01: haproxy & keepalived
lb02: haproxy & keepalived
etcd01: etcd node 1
etcd02: etcd node 2
etcd03: etcd node 3
master-01: k3s in server mode with a taint for not accepting any jobs
master-02: same as above, just joining with the token from master-01
master-03: same as master-02
worker-01 - worker-03: k3s agents
If I understand it correctly, k3s ships with flannel pre-installed as the CNI, as well as Traefik as an Ingress Controller.
I've set up Rancher on my cluster as well as Longhorn; the volumes are just ZFS volumes mounted inside the agents though, and since they aren't on different HDDs I've set the replicas to 1. I have a friend running the same setup (we set them up together, just yesterday) and we are planning on joining our networks through VPN tunnels and then providing storage nodes for each other as an offsite backup.
So far I've hopefully got everything correct.
Now to my question: I've got both a static IP at home and a domain, and I've pointed that domain at my static IP.
Something like this (I don't know exactly how DNS entries are written, this is just off the top of my head for your reference; the entries are working fine):
A example.com. [[my-ip]]
CNAME *.example.com. example.com
I've currently set up a port-forward to one of my master nodes for ports 80 & 443, but I am not quite sure how you would actually configure that with HA in mind. Also, my Rancher is throwing a 503 after visiting global settings, even though I have not changed anything.
So now my question: how would one actually configure the port-forward? As far as I know k3s has a load balancer pre-installed, but how would one configure those port-forwards for HA? The one master node they point to could, theoretically, just stop working, and then no services would be reachable from outside anymore.
Assuming your apps are running on ports 80 and 443, your ingress should give you a service with an external IP, and you would point your DNS at that. Read below for more info.
Seems like you are not a noob! You've got a lot going on with your cluster setup. What you are asking is a bit complicated to answer and I will have to make some assumptions about your setup, but I will do my best to give you at least some initial info.
This tutorial has a ton of great info and may help you with what you are doing. They use kubeadm instead of k3s, but you can skip that section if you want and still use k3s.
https://www.debontonline.com/p/kubernetes.html
If you are setting up and installing etcd on your own, you don't need to do that; k3s can create an etcd cluster for you, running on the server nodes of your cluster.
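A minimal sketch of how that looks when bootstrapping the first server (the --cluster-init flag turns on the embedded etcd; additional servers then join it with --server and the node token):

# first k3s server: start with embedded etcd instead of an external etcd cluster
curl -sfL https://get.k3s.io | sh -s - server --cluster-init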
Load Balancing your master nodes
The haproxy + keepalived nodes would be configured to point to the IPs of your master nodes at port 6443 (TCP). keepalived gives you a virtual IP, and you would configure your kubeconfig (the one you get from k3s) to talk to that IP, as in the sketch below. On your router you will want to reserve an IP for this (make sure not to assign it to any computers).
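A minimal sketch of that kubeconfig change (192.168.1.230 is a hypothetical virtual IP owned by keepalived; the rest of the file, certificates and users, stays exactly as k3s generated it):

# ~/.kube/config (excerpt)
apiVersion: v1
kind: Config
clusters:
- name: default
  cluster:
    server: https://192.168.1.230:6443   # the keepalived VIP, not an individual master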
This is a good video that explains how to do it with a Node.js server, but the concepts are the same for your master nodes:
https://www.youtube.com/watch?v=NizRDkTvxZo
Load Balancing your applications running in the cluster
Use a K8s Service; read more about it here: https://kubernetes.io/docs/concepts/services-networking/service/
Essentially you need an external IP; I prefer to do this with MetalLB.
MetalLB gives you a Service of type LoadBalancer with an external IP, as in the sketch below.
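A minimal sketch of such a Service (the name, label and ports are made up for the example; with MetalLB installed it gets its external IP from the address pool configured further down):

apiVersion: v1
kind: Service
metadata:
  name: my-app             # hypothetical name
spec:
  type: LoadBalancer       # MetalLB assigns the external IP
  selector:
    app: my-app            # hypothetical label on your pods
  ports:
  - port: 80
    targetPort: 8080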
Add this flag to k3s when creating the initial master node:
https://metallb.universe.tf/configuration/k3s/
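The flag the MetalLB docs refer to is --disable servicelb, which turns off k3s's built-in service load balancer so MetalLB can take over. A sketch of the install command for the first server:

# create the first k3s server without the built-in service load balancer
curl -sfL https://get.k3s.io | sh -s - server --disable servicelb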
Configure MetalLB:
https://metallb.universe.tf/configuration/#layer-2-configuration
You will want to reserve more IPs on your router and put them under the addresses section in the YAML below. In this example you have 11 IPs in the range 192.168.1.240 to 192.168.1.250.
Create this as a file, for example metallb-cm.yaml (apply it after the MetalLB manifests below, since those create the metallb-system namespace):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
kubectl apply -f metallb-cm.yaml
Install with these yaml files:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
source - https://metallb.universe.tf/installation/#installation-by-manifest
Ingress
Your ingress will need a Service of type LoadBalancer; use its external IP as the IP you point your DNS at.
Run kubectl get service -A, look for your ingress service, and check that it has an external IP and does not say pending.
I will do my best to answer any of your follow up questions. Good Luck!

How to configure Rancher so that internal DNS resolves to custom address

I am new to Rancher 2.0 and I have a set of pods running in a 2-node cluster. Some of the pods need to reach an internal DNS entry that resolves to a port on some of the other pods, with a custom address like http://mypod:443 or wss://mypod:443. How can I go about achieving this?

Can't access kubernetes service which have externalTrafficPolicy as "Local"

I'm following this guide to preserve the source IP for a Service of type NodePort.
kubectl create deployment source-ip-app --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port=8080
At this point my service is accessible externally with nodeip:nodeport
When I change the service traffic policy,
kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}'
my service is not accessible.
I found a similar issue, but the solution is not very helpful, or at least not understandable for me. I saw some GitHub threads that say it's something to do with the hostname override in kube-proxy, but I'm not clear on that either.
I'm using Kubernetes version v1.15.3. kube-proxy is running in iptables mode. I have a single master node and a few worker nodes.
I'm facing the same issue in my minikube too.
Any help would be greatly appreciated.
From the docs here
If there are no local endpoints, packets sent to the node are dropped
So you need to use the correct node IP of the Kubernetes node to access the service. Here, the correct node IP is the IP of the node where the pod is scheduled.
This is not necessary if you can make sure every node (masters and workers) has a replica of the pod.
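A quick way to check this, as a sketch (the label and service name come from the commands in the question):

# find the node the pod was scheduled on, then use that node's IP with the NodePort
kubectl get pods -l app=source-ip-app -o wide   # the NODE column shows where the pod runs
kubectl get svc nodeport                        # note the assigned NodePort
# then: curl http://<ip-of-that-node>:<nodeport>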

Kubernetes, How to enable inter-pod DNS within the same Deployment?

I am new to Kubernetes, and I am trying to make inter-pod communication over DNS work.
Pods in my k8s cluster are spawned using Deployments. My problem is that all the pods report their hostnames to ZooKeeper, and the pods use the hostnames found in ZooKeeper to ping their peers. This always fails because the peers' hostnames are not resolvable between pods.
The only solution for now is to manually add each pod's hostname to the peers' /etc/hosts files, but this method will not keep working for large clusters.
A DNS solution for inter-pod communication that keeps a record of any newly created pods, and removes dead ones, would be great.
Thanks in advance.
One solution I found was to add hostname and subdomain under spec->template->spec; then communication over hostnames between the pods works (see the sketch below).
However, this solution is fairly clumsy, because I cannot set the replicas of a Deployment to more than 1, or I would get more than one pod with the same hostname in the cluster. If I have 10 slave nodes with the same function in a cluster, I will need to create 10 Deployments.
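For reference, the workaround described above looks roughly like this (a sketch with made-up names; for the hostname to resolve, the Kubernetes DNS docs also require a headless Service with the same name as the subdomain):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker-1                 # hypothetical name
spec:
  replicas: 1                    # cannot be raised: all replicas would share the hostname
  selector:
    matchLabels:
      app: worker-1
  template:
    metadata:
      labels:
        app: worker-1
    spec:
      hostname: worker-1         # fixed pod hostname reported to ZooKeeper
      subdomain: workers         # resolves as worker-1.workers.<namespace>.svc.cluster.local
      containers:
      - name: app
        image: busybox           # placeholder image
        command: ["sleep", "infinity"]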
Any better solutions?
You need to use a service definition pointing to your pods
https://kubernetes.io/docs/concepts/services-networking/service/
With that you have a load-balanced proxy in front of the inter-pod communication, and the internal DNS in Kubernetes keeps track of that Service instead of each pod, no matter the state of the pods. A minimal sketch is below.
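The sketch assumes a hypothetical app: worker label on the Deployment's pods and a placeholder port:

apiVersion: v1
kind: Service
metadata:
  name: workers              # resolvable in-cluster as workers.<namespace>.svc.cluster.local
spec:
  selector:
    app: worker              # hypothetical label on the Deployment's pods
  ports:
  - port: 2181               # adjust to whatever port your peers actually talk on
    targetPort: 2181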
If that simple solution doesn't fit your needs, you can replace kube-dns as the default internal DNS with CoreDNS.
https://coredns.io/

Kubernetes with flannel cannot establish connection between 2 pods on different nodes

We have started a cluster with a /16 subnet, and flannel as our networking overlay. The pods are getting created on the 2 nodes running the sock-shop demo application. But what we are noticing is that pods on different nodes cannot establish connections between them. We do see the routing entries for the pods via the flannel.1 interface. Even ping fails. Any pointers to debug information would be appreciated.
You could try to check whether the Docker bridge IP (the --bip= option) is in the same network as the flannel interface.
You can also check the etcd network settings under /coreos.com/network/ with the etcdctl command, as in the sketch below.
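A few things to look at (a sketch; this assumes flannel uses the etcd backend and the etcdctl v2 API):

cat /run/flannel/subnet.env                  # FLANNEL_SUBNET handed to this node by flannel
ps aux | grep dockerd                        # check the --bip= value docker was started with
etcdctl get /coreos.com/network/config       # the overlay network flannel reads from etcd
etcdctl ls /coreos.com/network/subnets       # per-node subnet leases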