I'm writing some scripts that check the system to make sure of some cluster characteristics. Things running on private IP address spaces, etc. These checks are just a manual step when setting up a cluster, and used just for sanity checking.
They'll be run on each node, but I'd like a subset of them to run only when on the master node. Is there a bash, curl, kubectl, or other command that indicates whether the current node is a master node?
The master node(s) usually have the 'master' role associated with them. For example:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-x-x-x-x.us-west-2.compute.internal Ready <none> 7d v1.11.2
ip-x-x-x-x.us-west-2.compute.internal Ready master 78d v1.11.2
ip-x-x-x-x.us-west-2.compute.internal Ready <none> 7d v1.11.2
ip-x-x-x-x.us-west-2.compute.internal Ready <none> 7d v1.11.2
ip-x-x-x-x.us-west-2.compute.internal Ready <none> 7d v1.11.2
It also has a label node-role.kubernetes.io/master associated with it. For example:
$ kubectl get node ip-x-x-x-x.us-west-2.compute.internal -o=yaml
apiVersion: v1
kind: Node
metadata:
  annotations:
    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
    node.alpha.kubernetes.io/ttl: "0"
    projectcalico.org/IPv4Address: x.x.x.x/20
    volumes.kubernetes.io/controller-managed-attach-detach: "true"
  creationTimestamp: 2018-07-23T21:10:22Z
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/instance-type: t3.medium
    beta.kubernetes.io/os: linux
    failure-domain.beta.kubernetes.io/region: us-west-2
    failure-domain.beta.kubernetes.io/zone: us-west-2c
    kubernetes.io/hostname: ip-x-x-x-x.us-west-2.compute.internal
    node-role.kubernetes.io/master: ""
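For a scripted check like the one in the question, here is a minimal bash sketch; it assumes the Kubernetes node name equals the local hostname and that kubectl is configured on the node being checked (both assumptions, adjust for your setup):
#!/usr/bin/env bash
NODE_NAME=$(hostname)   # assumption: node name == local hostname

# Match on the label's existence (its value is often empty, as shown above).
if kubectl get nodes -l node-role.kubernetes.io/master \
     -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | grep -Fqx "$NODE_NAME"; then
  echo "$NODE_NAME is a master node"
else
  echo "$NODE_NAME is a worker node"
fi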
Some more ways:
$ kubectl cluster-info
Kubernetes master is running at https://node1.example.com:8443
...
You can use kubectl with a label selector (note: on some setups, e.g. kubeadm, the label value is empty, so selecting on the label's existence with -l node-role.kubernetes.io/master is more reliable than matching =true):
$ kubectl get nodes -l node-role.kubernetes.io/master=true
NAME STATUS ROLES AGE VERSION
node1.example.com Ready master 1d v1.10.5
node2.example.com Ready master 1d v1.10.5
And you can get specific data via jsonpath, e.g. the master IPs/hostnames:
$ kubectl get nodes -l node-role.kubernetes.io/master=true -o 'jsonpath={.items[*].status.addresses[?(@.type=="InternalIP")].address}'
192.168.168.197 192.168.168.198
$ kubectl get nodes -l node-role.kubernetes.io/master=true -o 'jsonpath={.items[*].status.addresses[?(@.type=="Hostname")].address}'
node1.example.com node2.example.com
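Those addresses can feed the same sort of node-local check; a rough sketch, assuming the script runs on the node being tested and that the node's InternalIP shows up in hostname -I (which may not hold with multiple interfaces):
# Sketch: does one of the master InternalIPs belong to this host?
MASTER_IPS=$(kubectl get nodes -l node-role.kubernetes.io/master \
  -o 'jsonpath={.items[*].status.addresses[?(@.type=="InternalIP")].address}')
for ip in $MASTER_IPS; do
  if hostname -I | grep -Fqw "$ip"; then
    echo "current node ($ip) is a master"
  fi
done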
I am trying to understand the network configuration concepts of a Pod in Kubernetes. I may have misunderstood the concept, so I am looking for some suggestions and help.
Following is my Pod spec, where I have set hostNetwork: true and dnsPolicy to ClusterFirstWithHostNet:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  containers:
  - image: nginx
    name: nginx
I can see that my pod and host now have the same IP.
**k get pod -o wide**
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 0/1 ContainerCreating 0 13s 172.25.0.31 controlplane <none> <none>
**k get node -o wide**
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
controlplane Ready control-plane,master 5m30s v1.23.3+k3s1 172.25.0.31 <none> Alpine Linux v3.15 5.4.0-1069-gcp containerd://1.5.9-k3s1
So my question is: why is it that when I check the DNS config on the host and in the pod, they give different results? My understanding is that they should both show the same results.
controlplane ~ ➜ cat /etc/resolv.conf
nameserver 172.25.0.1
options ndots:0
From POD
controlplane ~ ➜ k exec -it nginx -- cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.43.0.10
options ndots:5
My understanding is that they should both show the same results?
Because you set dnsPolicy: ClusterFirstWithHostNet, which uses CoreDNS (10.43.0.10) in the cluster. If you set dnsPolicy: Default, the pod will use 172.25.0.1, the same as your host, but you will not be able to resolve the names of services deployed in the cluster.
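To see the difference side by side, here is a quick sketch (the pod name nginx-default-dns is just an illustrative choice):
# Same pod, but with dnsPolicy: Default so it inherits the node's resolv.conf.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-default-dns
spec:
  hostNetwork: true
  dnsPolicy: Default
  containers:
  - image: nginx
    name: nginx
EOF

kubectl exec -it nginx-default-dns -- cat /etc/resolv.conf
# Expect the node's nameserver (172.25.0.1 here) instead of the cluster DNS (10.43.0.10),
# and cluster service names (e.g. kubernetes.default) will no longer resolve.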
How would I display available schedulers in my cluster in order to use non default one using the schedulerName field?
Any link to a document describing how to "install" and use a custom scheduler is highly appreciated :)
Thx in advance
Schedulers can be found among your kube-system pods. You can then filter the output to your needs with kube-scheduler as the search key:
➜ ~ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6955765f44-9wfkp 0/1 Completed 15 264d
coredns-6955765f44-jmz9j 1/1 Running 16 264d
etcd-acid-fuji 1/1 Running 17 264d
kube-apiserver-acid-fuji 1/1 Running 6 36d
kube-controller-manager-acid-fuji 1/1 Running 21 264d
kube-proxy-hs2qb 1/1 Running 0 177d
kube-scheduler-acid-fuji 1/1 Running 21 264d
You can retrieve the yaml file with:
➜ ~ kubectl get pods -n kube-system <scheduler pod name> -oyaml
If you bootstrapped your cluster with kubeadm, you may also find the YAML files in /etc/kubernetes/manifests:
➜ manifests sudo cat /etc/kubernetes/manifests/kube-scheduler.yaml
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    image: k8s.gcr.io/kube-scheduler:v1.17.6
    imagePullPolicy: IfNotPresent
---------
The location for minikube is similar, but you do have to log in to the minikube virtual machine first with minikube ssh.
For more reading, please have a look at how to configure multiple schedulers and how to write custom schedulers.
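Once a second scheduler is running, a pod selects it through spec.schedulerName; here is a minimal sketch, where my-custom-scheduler is a placeholder for whatever name your scheduler registers under:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: scheduled-by-custom
spec:
  schedulerName: my-custom-scheduler   # placeholder: must match the scheduler's registered name
  containers:
  - name: nginx
    image: nginx
EOF
If no scheduler with that name is running, the pod simply stays in Pending.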
You can try this one:
kubectl get pods --all-namespaces | grep scheduler
How can one find out the Kubernetes pod and service subnets in use (e.g. 10.244.0.0/16 and 10.96.0.0/12 respectively) from inside a Kubernetes cluster, in a portable and simple way?
For instance, kubectl get cm -n kube-system kubeadm-config -o yaml reports podSubnet and serviceSubnet. But this is not fully portable because a cluster may have been set up by another means than kubeadm.
kubectl get cm -n kube-system kube-proxy -o yaml reports clusterCIDR (i.e. pod subnet) and kubectl get pod -n kube-system kube-apiserver-master1 -o yaml reports the value
passed as command-line option --service-cluster-ip-range to kube-apiserver (i.e. service subnet). master1 stands for the name of any control plane node. But this seems a bit complex.
Is there a better way available e.g. with the Kubernetes 1.17 API?
I don't think it is possible to obtain what you want in a portable and simple way.
If you don't specify the CIDR parameters, default ones are assigned.
As there are many ways to run Kubernetes, unmanaged clusters like kubeadm, minikube, k3s, or microk8s, and managed offerings from cloud providers (GKE, Azure, AWS), it's hard to find one way to list all CIDRs in all environments. Another obstacle can be the version of Kubernetes or of the CNI.
In Kubernetes 1.17 Release notes you can find information that
Deprecate the default service IP CIDR. The previous default was 10.0.0.0/24 which will be removed in 6 months/2 releases. Cluster admins must specify their own desired value, by using --service-cluster-ip-range on kube-apiserver.
As an example with kubeadm: $ kubeadm init --pod-network-cidr 10.100.0.0/12 --service-cidr 10.99.0.0/12
There are a few ways to get this pod and service-cidr:
$ kubectl cluster-info dump | grep -E '(service-cluster-ip-range|cluster-cidr)'
"--service-cluster-ip-range=10.99.0.0/12",
"--cluster-cidr=10.100.0.0/12",
$ kubeadm config view | grep Subnet
podSubnet: 10.100.0.0/12
serviceSubnet: 10.99.0.0/12
But if you check all pods in this cluster, some pods have addresses starting with 192.168.190.X or 192.168.137.X:
$ kubectl get pods -A -owide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default nginx 1/1 Running 0 62m 192.168.190.129 kubeadm-worker <none> <none>
kube-system calico-kube-controllers-77c5fc8d7f-9n6m5 1/1 Running 0 118m 192.168.137.66 kubeadm-master <none> <none>
kube-system calico-node-2kx2v 1/1 Running 0 117m 10.128.0.4 kubeadm-worker <none> <none>
kube-system calico-node-8xqd9 1/1 Running 0 118m 10.128.0.3 kubeadm-master <none> <none>
kube-system coredns-66bff467f8-sgmkw 1/1 Running 0 120m 192.168.137.65 kubeadm-master <none> <none>
kube-system coredns-66bff467f8-t84ht 1/1 Running 0 120m 192.168.137.67 kubeadm-master <none> <none>
If you describe the CNI pods, you can find yet another CIDR:
CALICO_IPV4POOL_CIDR: 192.168.0.0/16
For example, on GKE you will have node CIDRs:
$ kubectl describe node | grep CIDRs
PodCIDRs: 10.52.1.0/24
PodCIDRs: 10.52.0.0/24
PodCIDRs: 10.52.2.0/24
$ gcloud container clusters describe cluster-2 --zone=europe-west2-b | grep Cidr
clusterIpv4Cidr: 10.52.0.0/14
clusterIpv4Cidr: 10.52.0.0/14
clusterIpv4CidrBlock: 10.52.0.0/14
servicesIpv4Cidr: 10.116.0.0/20
servicesIpv4CidrBlock: 10.116.0.0/20
podIpv4CidrSize: 24
servicesIpv4Cidr: 10.116.0.0/20
Honestly, I don't think there is an easy and portable way to list all pod CIDRs and service CIDRs in one simple command.
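One more source that is often available (an assumption: it requires the controller manager to allocate node CIDRs, which not every CNI setup does): each Node object carries its assigned pod range in spec.podCIDR.
# Per-node pod CIDRs straight from the Node objects (empty if not allocated).
$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'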
I would like to reserve some worker nodes for a namespace. I have seen the notes on Stack Overflow and Medium:
How to assign a namespace to certain nodes?
https://medium.com/@alejandro.ramirez.ch/reserving-a-kubernetes-node-for-specific-nodes-e75dc8297076
I understand we can use taint and nodeselector to achieve that.
My question is: if people get to know the details of the nodeSelector or taint, how can we prevent them from deploying pods onto these dedicated worker nodes?
Thank you.
To accomplish what you need, basically you have to use taint.
Let's suppose you have a Kubernetes cluster with one Master and 2 Worker nodes:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
knode01 Ready <none> 8d v1.16.2
knode02 Ready <none> 8d v1.16.2
kubemaster Ready master 8d v1.16.2
As an example, I'll set up knode01 as prod and knode02 as dev.
$ kubectl taint nodes knode01 key=prod:NoSchedule
$ kubectl taint nodes knode02 key=dev:NoSchedule
To run a pod on these nodes, we have to specify a toleration in the spec section of your YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "dev"
    effect: "NoSchedule"
This pod (pod1) will always run on knode02 because its toleration matches the dev taint. If we want it to run on prod, our tolerations should look like this:
tolerations:
- key: "key"
  operator: "Equal"
  value: "prod"
  effect: "NoSchedule"
Since we have only 2 nodes and both are tainted to run only prod or dev pods, if we try to run a pod without specifying tolerations, the pod will stay in a Pending state:
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod0 1/1 Running 0 21m 192.168.25.156 knode01 <none> <none>
pod1 1/1 Running 0 20m 192.168.32.83 knode02 <none> <none>
pod2 1/1 Running 0 18m 192.168.25.157 knode01 <none> <none>
pod3 1/1 Running 0 17m 192.168.32.84 knode02 <none> <none>
shell-demo 0/1 Pending 0 16m <none> <none> <none> <none>
To remove a taint:
$ kubectl taint nodes knode02 key:NoSchedule-
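To double-check which taints are in place at any point, something like this works:
$ kubectl describe nodes | grep -A1 -i taints
$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.taints}{"\n"}{end}'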
This is how it can be done (a sketch follows the list):
Add a new label, say ns=reserved, to a specific worker node.
Add a taint and matching tolerations to target specific pods onto this worker node.
Define RBAC roles and role bindings in that namespace to control what other users can do.
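A rough sketch of the first two steps (node name node01 and the ns=reserved key/value are placeholders, not taken from the question):
# Label and taint the reserved worker node.
kubectl label node node01 ns=reserved
kubectl taint node node01 ns=reserved:NoSchedule

# Pods meant for that node opt in with a matching nodeSelector and toleration.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: reserved-pod
spec:
  nodeSelector:
    ns: reserved
  tolerations:
  - key: "ns"
    operator: "Equal"
    value: "reserved"
    effect: "NoSchedule"
  containers:
  - name: nginx
    image: nginx
EOF
If no toleration is added, the taint alone keeps other workloads off node01.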
I'm trying to ping the kube-dns service from a dnstools pod using the cluster IP assigned to the kube-dns service. The ping request times out. From the same dnstools pod, I tried to curl the kube-dns service using the exposed port, but that timed out as well.
Following is the output of kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
default pod/busybox 1/1 Running 62 2d14h 192.168.1.37 kubenode <none>
default pod/dnstools 1/1 Running 0 2d13h 192.168.1.45 kubenode <none>
default pod/nginx-deploy-7c45b84548-ckqzb 1/1 Running 0 6d11h 192.168.1.5 kubenode <none>
default pod/nginx-deploy-7c45b84548-vl4kh 1/1 Running 0 6d11h 192.168.1.4 kubenode <none>
dmi pod/elastic-deploy-5d7c85b8c-btptq 1/1 Running 0 2d14h 192.168.1.39 kubenode <none>
kube-system pod/calico-node-68lc7 2/2 Running 0 6d11h 10.62.194.5 kubenode <none>
kube-system pod/calico-node-9c2jz 2/2 Running 0 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/coredns-5c98db65d4-5nprd 1/1 Running 0 6d12h 192.168.0.2 kubemaster <none>
kube-system pod/coredns-5c98db65d4-5vw95 1/1 Running 0 6d12h 192.168.0.3 kubemaster <none>
kube-system pod/etcd-kubemaster 1/1 Running 0 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/kube-apiserver-kubemaster 1/1 Running 0 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/kube-controller-manager-kubemaster 1/1 Running 1 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/kube-proxy-9hcgv 1/1 Running 0 6d11h 10.62.194.5 kubenode <none>
kube-system pod/kube-proxy-bxw9s 1/1 Running 0 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/kube-scheduler-kubemaster 1/1 Running 1 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/tiller-deploy-767d9b9584-5k95j 1/1 Running 0 3d9h 192.168.1.8 kubenode <none>
nginx-ingress pod/nginx-ingress-66wts 1/1 Running 0 5d17h 192.168.1.6 kubenode <none>
In the above output, why do some pods have an IP assigned in the 192.168.0.0/24 subnet whereas others have an IP that is equal to the IP address of my node/master? (10.62.194.4 is the IP of my master, 10.62.194.5 is the IP of my node)
This is the config.yml I used to initialize the cluster using kubeadm init --config=config.yml
apiServer:
  certSANs:
  - 10.62.194.4
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: dev-cluster
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Result of kubectl get svc --all-namespaces -o wide
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d12h <none>
default service/nginx-deploy ClusterIP 10.97.5.194 <none> 80/TCP 5d17h run=nginx
dmi service/elasticsearch ClusterIP 10.107.84.159 <none> 9200/TCP,9300/TCP 2d14h app=dmi,component=elasticse
dmi service/metric-server ClusterIP 10.106.117.2 <none> 8098/TCP 2d14h app=dmi,component=metric-se
kube-system service/calico-typha ClusterIP 10.97.201.232 <none> 5473/TCP 6d12h k8s-app=calico-typha
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 6d12h k8s-app=kube-dns
kube-system service/tiller-deploy ClusterIP 10.98.133.94 <none> 44134/TCP 3d9h app=helm,name=tiller
The command I ran was kubectl exec -ti dnstools -- curl 10.96.0.10:53
EDIT:
I raised this question because I got this error when trying to resolve service names from within the cluster. I was under the impression that I got this error because I cannot ping the DNS server from a pod.
Output of kubectl exec -ti dnstools -- nslookup kubernetes.default
;; connection timed out; no servers could be reached
command terminated with exit code 1
Output of kubectl exec dnstools cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local reddog.microsoft.com
options ndots:5
Result of kubectl get ep kube-dns --namespace=kube-system
NAME ENDPOINTS AGE
kube-dns 192.168.0.2:53,192.168.0.3:53,192.168.0.2:53 + 3 more... 6d13h
EDIT:
Ping-ing the CoreDNS pod directly using its Pod IP times out as well:
/ # ping 192.168.0.2
PING 192.168.0.2 (192.168.0.2): 56 data bytes
^C
--- 192.168.0.2 ping statistics ---
24 packets transmitted, 0 packets received, 100% packet loss
EDIT:
I think something has gone wrong when I was setting up the cluster. Below are the steps I took when setting up the cluster:
Edit host files on master and worker to include the IP's and hostnames of the nodes
Disabled swap using swapoff -a and disabled swap permanently by editing /etc/fstab
Install docker prerequisites using apt-get install apt-transport-https ca-certificates curl software-properties-common -y
Added Docker GPG key using curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
Added Docker repo using add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Install Docker using apt-get update -y; apt-get install docker-ce -y
Install Kubernetes prerequisites using curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
Added Kubernetes repo using echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Update repo and install Kubernetes components using apt-get update -y; apt-get install kubelet kubeadm kubectl -y
Configure master node:
kubeadm init --apiserver-advertise-address=10.62.194.4 --apiserver-cert-extra-sans=10.62.194.4 --pod-network-cidr=192.168.0.0/16
Copy Kube config to $HOME: mkdir -p $HOME/.kube; sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config; sudo chown $(id -u):$(id -g) $HOME/.kube/config
Installed Calico using kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml; kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
On node:
On the node I did the kubeadm join command using the command printed out from kubeadm token create --print-join-command on the master
The Kubernetes system pods get assigned the host IP since they provide low-level services that are not dependent on an overlay network (or, in the case of Calico, even provide the overlay network). They have the IP of the node where they run.
A common pod uses the overlay network and gets assigned an IP from the Calico range, not from the bare-metal node it runs on.
You can't access DNS (port 53) with HTTP using curl. You can use dig to query a DNS resolver.
A service IP is not reachable by ping since it is a virtual IP, just a routing handle for the iptables rules set up by kube-proxy; therefore a TCP connection works, but ICMP does not.
You can ping a pod IP though, since it is assigned from the overlay network.
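A more meaningful test from the dnstools pod is therefore an actual DNS query rather than ping or plain curl (assuming the dnstools image ships dig and nslookup, as the commonly used one does):
$ kubectl exec -ti dnstools -- dig @10.96.0.10 kubernetes.default.svc.cluster.local +short
$ kubectl exec -ti dnstools -- nslookup kubernetes.default 10.96.0.10
If these also time out, the problem is connectivity to the CoreDNS pods (i.e. the CNI), not DNS itself.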
You should check within the same namespace. Currently, you are in the default namespace and curling a service in the kube-system namespace. If you check from within the same namespace, I think it will work.
In some cases, the local host that Elasticsearch publishes is not routable/accessible from other hosts. In these cases, you will have to configure network.publish_host in the YAML config file, in order for Elasticsearch to use and publish the right address.
Try configuring network.publish_host to the right public address.
See more here:
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html#advanced-network-settings
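For example, the relevant setting in elasticsearch.yml could look like this; a sketch only, the config path and address below are placeholders, pick an address the other hosts can actually reach:
# Illustrative: append a publish_host setting to the Elasticsearch config.
cat >> /path/to/elasticsearch.yml <<'EOF'
network.publish_host: 10.62.194.5   # placeholder: an address reachable from other hosts
EOF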
Note that control plane components like the API server and etcd that run on the master node are bound to the host network, and hence you see the IP address of the master server.
On the other hand, the apps that you deployed get their IPs from the pod subnet range; those differ from the cluster node IPs.
Try the steps below to test whether DNS is working or not.
Deploy nginx.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
  labels:
    app: nginx
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        emptyDir: {}
kubectl create -f nginx.yaml
master $ kubectl get po
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 1m
web-1 1/1 Running 0 1m
master $ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 35m
nginx ClusterIP None <none> 80/TCP 2m
master $ kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm
If you don't see a command prompt, try pressing enter.
/ # nslookup nginx
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: nginx
Address 1: 10.40.0.1 web-0.nginx.default.svc.cluster.local
Address 2: 10.40.0.2 web-1.nginx.default.svc.cluster.local
/ #
/ # nslookup web-0.nginx
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: web-0.nginx
Address 1: 10.40.0.1 web-0.nginx.default.svc.cluster.local
/ # nslookup web-0.nginx.default.svc.cluster.local
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: web-0.nginx.default.svc.cluster.local
Address 1: 10.40.0.1 web-0.nginx.default.svc.cluster.local