Kubernetes pods can't ping each other using ClusterIP - kubernetes

I'm trying to ping the kube-dns service from a dnstools pod using the cluster IP assigned to the kube-dns service. The ping request times out. From the same dnstools pod, I tried to curl the kube-dns service using the exposed port, but that timed out as well.
Following is the output of kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
default pod/busybox 1/1 Running 62 2d14h 192.168.1.37 kubenode <none>
default pod/dnstools 1/1 Running 0 2d13h 192.168.1.45 kubenode <none>
default pod/nginx-deploy-7c45b84548-ckqzb 1/1 Running 0 6d11h 192.168.1.5 kubenode <none>
default pod/nginx-deploy-7c45b84548-vl4kh 1/1 Running 0 6d11h 192.168.1.4 kubenode <none>
dmi pod/elastic-deploy-5d7c85b8c-btptq 1/1 Running 0 2d14h 192.168.1.39 kubenode <none>
kube-system pod/calico-node-68lc7 2/2 Running 0 6d11h 10.62.194.5 kubenode <none>
kube-system pod/calico-node-9c2jz 2/2 Running 0 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/coredns-5c98db65d4-5nprd 1/1 Running 0 6d12h 192.168.0.2 kubemaster <none>
kube-system pod/coredns-5c98db65d4-5vw95 1/1 Running 0 6d12h 192.168.0.3 kubemaster <none>
kube-system pod/etcd-kubemaster 1/1 Running 0 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/kube-apiserver-kubemaster 1/1 Running 0 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/kube-controller-manager-kubemaster 1/1 Running 1 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/kube-proxy-9hcgv 1/1 Running 0 6d11h 10.62.194.5 kubenode <none>
kube-system pod/kube-proxy-bxw9s 1/1 Running 0 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/kube-scheduler-kubemaster 1/1 Running 1 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/tiller-deploy-767d9b9584-5k95j 1/1 Running 0 3d9h 192.168.1.8 kubenode <none>
nginx-ingress pod/nginx-ingress-66wts 1/1 Running 0 5d17h 192.168.1.6 kubenode <none>
In the above output, why do some pods have an IP assigned in the 192.168.0.0/24 subnet whereas others have an IP that is equal to the IP address of my node/master? (10.62.194.4 is the IP of my master, 10.62.194.5 is the IP of my node)
This is the config.yml I used to initialize the cluster using kubeadm init --config=config.yml
apiServer:
  certSANs:
  - 10.62.194.4
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: dev-cluster
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Result of kubectl get svc --all-namespaces -o wide
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d12h <none>
default service/nginx-deploy ClusterIP 10.97.5.194 <none> 80/TCP 5d17h run=nginx
dmi service/elasticsearch ClusterIP 10.107.84.159 <none> 9200/TCP,9300/TCP 2d14h app=dmi,component=elasticse
dmi service/metric-server ClusterIP 10.106.117.2 <none> 8098/TCP 2d14h app=dmi,component=metric-se
kube-system service/calico-typha ClusterIP 10.97.201.232 <none> 5473/TCP 6d12h k8s-app=calico-typha
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 6d12h k8s-app=kube-dns
kube-system service/tiller-deploy ClusterIP 10.98.133.94 <none> 44134/TCP 3d9h app=helm,name=tiller
The command I ran was kubectl exec -ti dnstools -- curl 10.96.0.10:53
EDIT:
I raised this question because I got this error when trying to resolve service names from within the cluster. I was under the impression that I got this error because I cannot ping the DNS server from a pod.
Output of kubectl exec -ti dnstools -- nslookup kubernetes.default
;; connection timed out; no servers could be reached
command terminated with exit code 1
Output of kubectl exec dnstools cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local reddog.microsoft.com
options ndots:5
Result of kubectl get ep kube-dns --namespace=kube-system
NAME ENDPOINTS AGE
kube-dns 192.168.0.2:53,192.168.0.3:53,192.168.0.2:53 + 3 more... 6d13h
EDIT:
Ping-ing the CoreDNS pod directly using its Pod IP times out as well:
/ # ping 192.168.0.2
PING 192.168.0.2 (192.168.0.2): 56 data bytes
^C
--- 192.168.0.2 ping statistics ---
24 packets transmitted, 0 packets received, 100% packet loss
EDIT:
I think something has gone wrong when I was setting up the cluster. Below are the steps I took when setting up the cluster:
Edit the hosts files on master and worker to include the IPs and hostnames of the nodes
Disabled swap using swapoff -a and disabled swap permanently by editing /etc/fstab
Install docker prerequisites using apt-get install apt-transport-https ca-certificates curl software-properties-common -y
Added Docker GPG key using curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
Added Docker repo using add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Install Docker using apt-get update -y; apt-get install docker-ce -y
Install Kubernetes prerequisites using curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
Added Kubernetes repo using echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Update repo and install Kubernetes components using apt-get update -y; apt-get install kubelet kubeadm kubectl -y
Configure master node:
kubeadm init --apiserver-advertise-address=10.62.194.4 --apiserver-cert-extra-sans=10.62.194.4 --pod-network-cidr=192.168.0.0/16
Copy Kube config to $HOME: mkdir -p $HOME/.kube; sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config; sudo chown $(id -u):$(id -g) $HOME/.kube/config
Installed Calico using kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml; kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
On node:
On the node I did the kubeadm join command using the command printed out from kubeadm token create --print-join-command on the master

The Kubernetes system pods get assigned the host IP since they provide low-level services that are not dependent on an overlay network (or, in the case of Calico, even provide the overlay network). They have the IP of the node where they run.
A regular pod uses the overlay network and gets assigned an IP from the Calico range, not from the bare-metal node it runs on.
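If you want to double-check which pods run on the host network, a quick way (a sketch using pod names from the output above) is to read the hostNetwork field:
kubectl get pod -n kube-system kube-apiserver-kubemaster -o jsonpath='{.spec.hostNetwork}{"\n"}'   # prints "true" for host-network pods
kubectl get pod -n kube-system calico-node-68lc7 -o jsonpath='{.spec.hostNetwork}{"\n"}'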
You can't access DNS (port 53) with HTTP using curl. You can use dig to query a DNS resolver instead.
A service IP is not reachable by ping since it is a virtual IP, used only as a match target for the iptables rules set up by kube-proxy; a TCP connection to the service port works, but ICMP does not.
You can ping a pod IP, though, since it is assigned from the overlay network.
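For example, a quick check from the dnstools pod could look like this (a sketch: it assumes the dnstools image ships dig and nc, and reuses the IPs from the question):
kubectl exec -ti dnstools -- dig @10.96.0.10 kubernetes.default.svc.cluster.local +short
# Test plain TCP reachability of port 53 instead of ICMP (ping will never get an answer from a ClusterIP)
kubectl exec -ti dnstools -- nc -vz 10.96.0.10 53
# On a node, the ClusterIP only exists as iptables rules written by kube-proxy
sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.10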

You should check from within the same namespace.
Currently you are in the default namespace and curling a service in the kube-system namespace.
Try the same check from inside that namespace; I think it will work.

In some cases the local host that Elasticsearch publishes is not routable/accessible from other hosts. In those cases you have to configure network.publish_host in the YAML config file so that Elasticsearch uses and publishes the right address.
Try setting network.publish_host to the correct public address.
See more here:
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html#advanced-network-settings
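As a quick way to see which address Elasticsearch is currently publishing (a sketch, using the pod name from the output above and the _nodes/http API):
kubectl -n dmi exec -ti elastic-deploy-5d7c85b8c-btptq -- curl -s 'localhost:9200/_nodes/http?pretty' | grep publish_address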

Note that control-plane components such as the API server and etcd run on the master node and are bound to the host network, which is why you see the IP address of the master server.
The apps that you deploy, on the other hand, get their IPs from the pod subnet range, which is distinct from the cluster node IPs.
Try the steps below to test whether DNS is working or not.
Deploy nginx.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
  labels:
    app: nginx
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        emptyDir: {}
kubectl create -f nginx.yaml
master $ kubectl get po
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 1m
web-1 1/1 Running 0 1m
master $ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 35m
nginx ClusterIP None <none> 80/TCP 2m
master $ kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm
If you don't see a command prompt, try pressing enter.
/ # nslookup nginx
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: nginx
Address 1: 10.40.0.1 web-0.nginx.default.svc.cluster.local
Address 2: 10.40.0.2 web-1.nginx.default.svc.cluster.local
/ #
/ # nslookup web-0.nginx
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: web-0.nginx
Address 1: 10.40.0.1 web-0.nginx.default.svc.cluster.local
/ # nslookup web-0.nginx.default.svc.cluster.local
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: web-0.nginx.default.svc.cluster.local
Address 1: 10.40.0.1 web-0.nginx.default.svc.cluster.local

Related

K8s no internet connection inside the container

I installed a clean K8s cluster in virtual machines (Debian 10). After the installation and integration into my landscape, I checked connectivity from inside my testing Alpine image. The result: outgoing traffic is not working, and nothing shows up in the CoreDNS log. As a workaround I overwrote /etc/resolv.conf in my build image and replaced the DNS entries (e.g. set 1.1.1.1 as nameserver). After that temporary "hack" the connection to the internet works perfectly. But the workaround is not a long-term solution and I want to use the official way. In the K8s CoreDNS documentation I found the forward section, and I read that option as a way to forward queries to the predefined local resolver. I think the forwarding to the local resolv.conf and the resolution process are not working correctly. Can anyone help me solve this issue?
Basic setup:
K8s version: 1.19.0
K8s setup: 1 master + 2 worker nodes
Based on: Debian 10 VM's
CNI: Flannel
Status of CoreDNS Pods
kube-system coredns-xxxx 1/1 Running 1 26h
kube-system coredns-yyyy 1/1 Running 1 26h
CoreDNS Log:
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
CoreDNS config:
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: ""
  name: coredns
  namespace: kube-system
  resourceVersion: "219"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: xxx
Output from the Alpine image:
/ # nslookup -debug google.de
;; connection timed out; no servers could be reached
Output of pods resolv.conf
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search development.svc.cluster.local svc.cluster.local cluster.local invalid
options ndots:5
Output of host resolv.conf
cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 213.136.95.11
nameserver 213.136.95.10
search invalid
Output of host /run/flannel/subnet.env
cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
Output of kubectl get pods -n kube-system -o wide
coredns-54694b8f47-4sm4t 1/1 Running 0 14d 10.244.1.48 xxx3-node-1 <none> <none>
coredns-54694b8f47-6c7zh 1/1 Running 0 14d 10.244.0.43 xxx2-master <none> <none>
coredns-54694b8f47-lcthf 1/1 Running 0 14d 10.244.2.88 xxx4-node-2 <none> <none>
etcd-xxx2-master 1/1 Running 7 27d xxx.xx.xx.xxx xxx2-master <none> <none>
kube-apiserver-xxx2-master 1/1 Running 7 27d xxx.xx.xx.xxx xxx2-master <none> <none>
kube-controller-manager-xxx2-master 1/1 Running 7 27d xxx.xx.xx.xxx xxx2-master <none> <none>
kube-flannel-ds-amd64-4w8zl 1/1 Running 8 28d xxx.xx.xx.xxx xxx2-master <none> <none>
kube-flannel-ds-amd64-w7m44 1/1 Running 7 28d xxx.xx.xx.xxx xxx3-node-1 <none> <none>
kube-flannel-ds-amd64-xztqm 1/1 Running 6 28d xxx.xx.xx.xxx xxx4-node-2 <none> <none>
kube-proxy-dfs85 1/1 Running 4 28d xxx.xx.xx.xxx xxx4-node-2 <none> <none>
kube-proxy-m4hl2 1/1 Running 4 28d xxx.xx.xx.xxx xxx3-node-1 <none> <none>
kube-proxy-s7p4s 1/1 Running 8 28d xxx.xx.xx.xxx xxx2-master <none> <none>
kube-scheduler-xxx2-master 1/1 Running 7 27d xxx.xx.xx.xxx xxx2-master <none> <none>
Problem:
The (two) CoreDNS pods were deployed only on the master node. You can check this with the following command.
kubectl get pods -n kube-system -o wide | grep coredns
Solution:
I was able to solve the problem by scaling up the CoreDNS pods and editing the deployment configuration. The following commands must be executed (a scale one-liner alternative is sketched after these commands).
kubectl edit deployment coredns -n kube-system
Set the replicas value to the number of nodes, e.g. 3
kubectl patch deployment coredns -n kube-system -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"force-update/updated-at\":\"$(date +%s)\"}}}}}"
kubectl get pods -n kube-system -o wide | grep coredns
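If you prefer not to open an editor, a scale one-liner should achieve the same replica change (a sketch; adjust the replica count to your node count):
kubectl scale deployment coredns -n kube-system --replicas=3
kubectl rollout status deployment coredns -n kube-system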
Source
https://blog.dbi-services.com/kubernetes-dns-resolution-using-coredns-force-update-deployment/
Hint
If you still have a problem with your coreDNS and your DNS resolution works sporadically, take a look at this post.

Hostname not registered in the namespace

I have set up a Kubernetes cluster via Rancher. All looked fine until I decided to deploy a Helm chart app to it (bitnami/wordpress).
Pods and services in the wordpress namespace:
> kubectl get pods,svc -owide --namespace=wordpress
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/wordpress-6647794f9b-4mmxd 0/1 Running 20 104m 10.42.0.19 dev-app <none> <none>
pod/wordpress-mariadb-0 1/1 Running 1 26h 10.42.0.14 dev-app <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/wordpress LoadBalancer 10.43.91.13 <pending> 80:30158/TCP,443:30453/TCP 26h app.kubernetes.io/instance=wordpress,app.kubernetes.io/name=wordpress,io.cattle.field/appId=wordpress
service/wordpress-mariadb ClusterIP 10.43.178.123 <none> 3306/TCP 26h app=mariadb,component=master,io.cattle.field/appId=wordpress,release=wordpress
Then I have tried to verify connections from the Wordpress CMS to the MariaDB:
On the MariaDB pod:
I have no name!@wordpress-mariadb-0:/$ cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.42.0.14 wordpress-mariadb-0.wordpress-mariadb.wordpress.svc.cluster.local wordpress-mariadb-0
No clue why -0 was added to the hostname, but
I have no name!@wordpress-mariadb-0:/$ mysql -h wordpress-mariadb-0.wordpress-mariadb.wordpress.svc.cluster.local -u root -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 48253
Server version: 10.3.22-MariaDB Source distribution
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>
works fine from the MariaDB pod.
Then on the WordPress pod:
I have no name!@wordpress-6647794f9b-4mmxd:/$ mysql -h wordpress-mariadb-0.wordpress-mariadb.wordpress.svc.cluster.local -u root -p
Enter password:
ERROR 2005 (HY000): Unknown MySQL server host 'wordpress-mariadb-0.wordpress-mariadb.wordpress.svc.cluster.local' (-3)
For some reason the hostname is not registered in that namespace.
Any clue?
PS
From the wordpress pod:
I have no name!@wordpress-6647794f9b-4mmxd:/$ cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.42.0.19 wordpress-6647794f9b-4mmxd
# Entries added by HostAliases.
127.0.0.1 status.localhost
and
I have no name!@wordpress-6647794f9b-4mmxd:/$ cat /etc/resolv.conf
nameserver 10.43.0.10
search wordpress.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
PS2
> kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
canal-8bf2l 2/2 Running 0 16d
canal-g782s 2/2 Running 2 16d
canal-vq474 2/2 Running 0 16d
coredns-849545576b-gcf7p 1/1 Running 0 16d
coredns-849545576b-vtqpw 1/1 Running 0 3d20h
coredns-autoscaler-84bf756579-v594n 1/1 Running 0 3d20h
metrics-server-697746ff48-rtw2h 1/1 Running 1 16d
rke-coredns-addon-deploy-job-2sjlv 0/1 Completed 0 3d20h
rke-ingress-controller-deploy-job-9q4c2 0/1 Completed 0 3d20h
rke-metrics-addon-deploy-job-cv42h 0/1 Completed 0 3d20h
rke-network-plugin-deploy-job-4pddn 0/1 Completed 0 3d20h
MariaDB is deployed as a StatefulSet. Each Pod in a StatefulSet derives its hostname from the name of the StatefulSet and the ordinal of the Pod; the pattern for the constructed hostname is $(statefulset name)-$(ordinal). This explains the -0 in the pod name and hostname.
Ideally you should be using the headless service exposing the MariaDB StatefulSet to access MariaDB from WordPress. Below is an example of such a service; you don't need to create it, because it already exists with the name
wordpress-mariadb
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mariadb
  namespace: wordpress
  labels:
    app: wordpress
spec:
  ports:
  - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
From the WordPress pod running in the same namespace, use wordpress-mariadb to access MariaDB.
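To sanity-check name resolution from the WordPress side, something like this should work (a sketch; it assumes getent and the mysql client are present in the bitnami/wordpress image):
# Run inside the wordpress pod
getent hosts wordpress-mariadb          # resolves via the headless service in the same namespace
mysql -h wordpress-mariadb -u root -p   # connect using the short service name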
Official guide on deploying wordpress with mysql
For some reason one of the nodes running a CoreDNS pod (I had two of them) had a problem serving data for the kube-dns service.
Rebuilding that node solved my problem.

LoadBalancer using Metallb on bare metal RPI cluster not working after installation

I'm fiddling around with my RPI cluster that I've set up using kubeadm, and I want to make LoadBalancers work on the cluster.
The IPs for the nodes are static and set to the range of 192.168.1.100-192.168.1.103 for the master and worker nodes.
I've installed MetalLB using the official site docs.
This is my configmap for Metallb:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.2.240-192.168.2.250
From what I understand, you're supposed to give the load balancers a range outside the DHCP range of the router? But even changing the address range to something like 192.168.1.200-192.168.1.240 doesn't change the results I'm getting.
The DHCP settings for my router.
K8s node info
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master Ready master 5d15h v1.17.4 192.168.1.100 <none> Raspbian GNU/Linux 10 (buster) 4.19.97-v7l+ docker://19.3.8
k8s-worker-01 Ready worker 5d15h v1.17.4 192.168.1.101 <none> Raspbian GNU/Linux 10 (buster) 4.19.97-v7+ docker://19.3.8
k8s-worker-02 Ready worker 4d14h v1.17.4 192.168.1.102 <none> Raspbian GNU/Linux 10 (buster) 4.19.97-v7+ docker://19.3.8
k8s-worker-03 Ready worker 4d14h v1.17.4 192.168.1.103 <none> Raspbian GNU/Linux 10 (buster) 4.19.97-v7+ docker://19.3.8
Then I try to set up a small nginx deployment
kubectl run nginx --image=nginx
kubectl expose deploy nginx --port=80 --type=LoadBalancer
Running kubectl get svc returns:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d15h
nginx LoadBalancer 10.96.53.50 <pending> 80:31253/TCP 4s
This is where I'm stuck now. I don't seem to be able to get a LoadBalancer to work with this setup and I'm not really sure where I'm going wrong.
Metallb output
NAME READY STATUS RESTARTS AGE
pod/controller-65895b47d4-q25b8 1/1 Running 0 68m
pod/speaker-bnkpq 1/1 Running 0 68m
pod/speaker-d56zg 1/1 Running 0 68m
pod/speaker-h9vpr 1/1 Running 0 68m
pod/speaker-qsl6f 1/1 Running 0 68m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/speaker 4 4 4 4 4 beta.kubernetes.io/os=linux 68m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/controller 1/1 1 1 68m
NAME DESIRED CURRENT READY AGE
replicaset.apps/controller-65895b47d4 1 1 1 68m
MetalLB layer2 mode doesn't receive broadcast packets unless promiscuous mode is enabled.
Try the following:
sudo ifconfig wlan0 promisc
Add a crontab entry that runs at each startup so you won't lose this change on reboot.
1. sudo crontab -e
2. Add this line at the end of the file
@reboot /sbin/ifconfig wlan0 promisc
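To confirm the flag actually stuck (assuming the interface is wlan0, as above):
ip link show wlan0 | grep -o PROMISC   # prints PROMISC while promiscuous mode is enabled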
In my setup with MetalLB, Cilium and the HAProxy Ingress Controller I had to change externalTrafficPolicy from Local to Cluster:
kubectl --namespace ingress-controller patch svc haproxy-ingress \
-p '{"spec":{"externalTrafficPolicy":"Cluster"}}'

kubectl proxy not working on Ubuntu LTS 18.04

I've installed Kubernetes on Ubuntu 18.04 using this article. Everything was working fine, and then I tried to install the Kubernetes dashboard with these instructions.
Now when I run kubectl proxy, the dashboard does not come up, and the browser shows the following error message when I try to access it using the default kubernetes-dashboard URL.
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "no endpoints available for service \"https:kubernetes-dashboard:\"",
  "reason": "ServiceUnavailable",
  "code": 503
}
The following commands give this output, where kubernetes-dashboard shows the status CrashLoopBackOff:
$> kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default amazing-app-rs-59jt9 1/1 Running 5 23d
default amazing-app-rs-k6fg5 1/1 Running 5 23d
default amazing-app-rs-qd767 1/1 Running 5 23d
default amazingapp-one-deployment-57dddd6fb7-xdxlp 1/1 Running 5 23d
default nginx-86c57db685-vwfzf 1/1 Running 4 22d
kube-system coredns-6955765f44-nqphx 0/1 Running 14 25d
kube-system coredns-6955765f44-psdv4 0/1 Running 14 25d
kube-system etcd-master-node 1/1 Running 8 25d
kube-system kube-apiserver-master-node 1/1 Running 42 25d
kube-system kube-controller-manager-master-node 1/1 Running 11 25d
kube-system kube-flannel-ds-amd64-95lvl 1/1 Running 8 25d
kube-system kube-proxy-qcpqm 1/1 Running 8 25d
kube-system kube-scheduler-master-node 1/1 Running 11 25d
kubernetes-dashboard dashboard-metrics-scraper-7b64584c5c-kvz5d 1/1 Running 0 41m
kubernetes-dashboard kubernetes-dashboard-566f567dc7-w2sbk 0/1 CrashLoopBackOff 12 41m
$> kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP ---------- <none> 443/TCP 25d
default nginx NodePort ---------- <none> 80:32188/TCP 22d
kube-system kube-dns ClusterIP ---------- <none> 53/UDP,53/TCP,9153/TCP 25d
kubernetes-dashboard dashboard-metrics-scraper ClusterIP ---------- <none> 8000/TCP 24d
kubernetes-dashboard kubernetes-dashboard ClusterIP ---------- <none> 443/TCP 24d
$ kubectl get events -n kubernetes-dashboard
LAST SEEN TYPE REASON OBJECT MESSAGE
24m Normal Pulling pod/kubernetes-dashboard-566f567dc7-w2sbk Pulling image "kubernetesui/dashboard:v2.0.0-rc2"
4m46s Warning BackOff pod/kubernetes-dashboard-566f567dc7-w2sbk Back-off restarting failed container
$ kubectl describe services kubernetes-dashboard -n kubernetes-dashboard
Name: kubernetes-dashboard
Namespace: kubernetes-dashboard
Labels: k8s-app=kubernetes-dashboard
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard"...
Selector: k8s-app=kubernetes-dashboard
Type: ClusterIP
IP: 10.96.241.62
Port: <unset> 443/TCP
TargetPort: 8443/TCP
Endpoints:
Session Affinity: None
Events: <none>
$ kubectl logs kubernetes-dashboard-566f567dc7-w2sbk -n kubernetes-dashboard
2020/01/29 16:00:34 Starting overwatch
2020/01/29 16:00:34 Using namespace: kubernetes-dashboard
2020/01/29 16:00:34 Using in-cluster config to connect to apiserver
2020/01/29 16:00:34 Using secret token for csrf signing
2020/01/29 16:00:34 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf: dial tcp 10.96.0.1:443: i/o timeout

goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc0003dac80)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:40 +0x3b4
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:65
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc000534200)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:494 +0xc7
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc000534200)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:462 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:543
main.main()
        /home/travis/build/kubernetes/dashboard/src/app/backend/dashboard.go:105 +0x212
Any suggestions to fix this? Thanks in advance.
I noticed that the guide you used to install the Kubernetes cluster is missing one important part.
According to the Kubernetes documentation:
For flannel to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 to kubeadm init.
Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1 to pass bridged IPv4 traffic to iptables’ chains. This is a requirement for some CNI plugins to work, for more information please see here.
Make sure that your firewall rules allow UDP ports 8285 and 8472 traffic for all hosts participating in the overlay network. see here .
Note that flannel works on amd64, arm, arm64, ppc64le and s390x under Linux. Windows (amd64) is claimed as supported in v0.11.0 but the usage is undocumented.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
For more information about flannel, see the CoreOS flannel repository on GitHub .
To fix this:
I suggest using the command:
sysctl net.bridge.bridge-nf-call-iptables=1
And then reinstall flannel:
kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
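If you do need to set it, a sketch for persisting the sysctl across reboots on Ubuntu 18.04 could be:
echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/99-kubernetes.conf
sudo sysctl --system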
Update: After verifying that the /proc/sys/net/bridge/bridge-nf-call-iptables value is 1 by default on Ubuntu 18.04 LTS, the issue here is that you need to access the dashboard locally.
If you are connected to your master node via SSH, it could be possible to use the -X flag with ssh in order to launch a web browser via ForwardX11. Fortunately Ubuntu 18.04 LTS has it turned on by default.
ssh -X server
Then install local web browser like chromium.
sudo apt-get install chromium-browser
chromium-browser
And finally access the dashboard locally from node.
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
Hope it helps.

How to change the default nodeport range on Mac (docker-desktop)?

How to change the default nodeport range on Mac (docker-desktop)?
I'd like to change the default NodePort range on Mac. Is it possible? I'm glad to have found this article: http://www.thinkcode.se/blog/2019/02/20/kubernetes-service-node-port-range. Since I can't find /etc/kubernetes/manifests/kube-apiserver.yaml in my environment, I tried to achieve what I want by running sudo kubectl edit pod kube-apiserver-docker-desktop --namespace=kube-system and adding the parameter --service-node-port-range=443-22000. But when I tried to save it, I got the following error:
# pods "kube-apiserver-docker-desktop" was not valid:
# * spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
(I get the same error even if I don't touch port 443.) Can someone please share his/her thoughts or experience? Thanks!
Append:
skwok-mbp:kubernetes skwok$ kubectl get deployment -A
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
docker compose 1/1 1 1 15d
docker compose-api 1/1 1 1 15d
ingress-nginx nginx-ingress-controller 1/1 1 1 37m
kube-system coredns 2/2 2 2 15d
skwok-mbp:kubernetes skwok$ kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default fortune-configmap-volume 2/2 Running 4 14d
default kubia-2qzmm 1/1 Running 2 15d
docker compose-6c67d745f6-qqmpb 1/1 Running 2 15d
docker compose-api-57ff65b8c7-g8884 1/1 Running 4 15d
ingress-nginx nginx-ingress-controller-756f65dd87-sq6lt 1/1 Running 0 37m
kube-system coredns-fb8b8dccf-jn8cm 1/1 Running 6 15d
kube-system coredns-fb8b8dccf-t6qhs 1/1 Running 6 15d
kube-system etcd-docker-desktop 1/1 Running 2 15d
kube-system kube-apiserver-docker-desktop 1/1 Running 2 15d
kube-system kube-controller-manager-docker-desktop 1/1 Running 29 15d
kube-system kube-proxy-6nzqx 1/1 Running 2 15d
kube-system kube-scheduler-docker-desktop 1/1 Running 30 15d
Update: The example from the documentation shows a way to adjust apiserver parameters during Minikube start:
minikube start --extra-config=apiserver.service-node-port-range=1-65535
--extra-config: A set of key=value pairs that describe configuration that may be passed to different components. The key should be '.' separated, and the first part before the dot is the component to apply the configuration to. Valid components are: kubelet, apiserver, controller-manager, etcd, proxy, scheduler. link
The list of available options can be found in the CLI documentation.
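Afterwards you can check that the flag landed on the running apiserver (a sketch; the static pod is typically named kube-apiserver-minikube):
kubectl -n kube-system get pod kube-apiserver-minikube -o yaml | grep -- --service-node-port-range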
Another way to change kube-apiserver parameters for Docker-for-desktop on Mac:
login to Docker VM:
$ screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
#(you can also use privileged container for the same purpose)
docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh
#or
docker run --rm -it --privileged --pid=host walkerlee/nsenter -t 1 -m -u -i -n sh
# as suggested here: https://forums.docker.com/t/is-it-possible-to-ssh-to-the-xhyve-machine/17426/5
# in case of minikube use the following command:
$ minikube ssh
Edit kube-apiserver.yaml (it's one of the static pods; they are created by the kubelet from files in /etc/kubernetes/manifests)
$ vi /etc/kubernetes/manifests/kube-apiserver.yaml
# for minikube
$ sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
Add the following line to the pod spec:
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.65.3
    ...
    - --service-node-port-range=443-22000 # <-- add this line
    ...
Save and exit. The kube-apiserver pod will be restarted with the new parameters.
Exit the Docker VM (for screen: Ctrl-a, k; for the container: Ctrl-d).
Check the results:
$ kubectl get pod kube-apiserver-docker-desktop -o yaml -n kube-system | less
Create a simple deployment and expose it with a service:
$ kubectl run nginx1 --image=nginx --replicas=2
$ kubectl expose deployment nginx1 --port 80 --type=NodePort
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 14d
nginx1 NodePort 10.99.173.234 <none> 80:14966/TCP 5s
As you can see, the NodePort was chosen from the new range.
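On Docker Desktop the node ports are published on localhost, so a quick check could be (using the port chosen above):
curl -I http://localhost:14966   # should return the nginx welcome page headers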
There are other ways to expose your container: HostNetwork, HostPort, MetalLB.
You need to add the correct security context for that purpose; check out how the ingress addon in minikube works, for example.
...
ports:
- containerPort: 80
  hostPort: 80
  protocol: TCP
- containerPort: 443
  hostPort: 443
  protocol: TCP
...
securityContext:
  capabilities:
    add:
    - NET_BIND_SERVICE
    drop:
    - ALL