I am trying to understand the network configuration of Pods in Kubernetes. I may have misunderstood the concept, so I am looking for suggestions and help.
Following is my Pod spec, where I have set hostNetwork: true and dnsPolicy: ClusterFirstWithHostNet:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  containers:
  - image: nginx
    name: nginx
I can see that my pod and host now have the same IP.
**k get pod -o wide**
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 0/1 ContainerCreating 0 13s 172.25.0.31 controlplane <none> <none>
**k get node -o wide**
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
controlplane Ready control-plane,master 5m30s v1.23.3+k3s1 172.25.0.31 <none> Alpine Linux v3.15 5.4.0-1069-gcp containerd://1.5.9-k3s1
So my question is: why do the DNS configs on the host and in the pod show different results when I check them? My understanding is that they should both show the same thing.
From the host:
controlplane ~ ➜ cat /etc/resolv.conf
nameserver 172.25.0.1
options ndots:0
From the pod:
controlplane ~ ➜ k exec -it nginx -- cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.43.0.10
options ndots:5
Because you set dnsPolicy: ClusterFirstWithHostNet, the pod uses CoreDNS (10.43.0.10) in the cluster. If you set dnsPolicy: Default, the pod will use 172.25.0.1, the same as your host, but you will not be able to resolve the names of services deployed in the cluster.
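For contrast, a minimal sketch of the same pod with dnsPolicy: Default (the pod name nginx-hostdns is just for illustration):
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx-hostdns
spec:
  hostNetwork: true
  dnsPolicy: Default    # inherit the node's /etc/resolv.conf (172.25.0.1 here)
  containers:
  - image: nginx
    name: nginx
Its /etc/resolv.conf should then match the host's, but cluster service names will no longer resolve.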
Related
I have a Jupyter notebook setup in the jupyter namespace on a kubernetes cluster, and Jupyter Enterprise Gateway setup in the enterprise-gateway namespace as a Service in the same cluster.
If I configure the notebook to connect to the enterprise-gateway service using the clusterIP it works fine.
--gateway-url=http://172.20.186.249:8888
but if I switch to using the service domain name the notebook receives a 503 Connection Refused error
--gateway-url=http://enterprise-gateway.enterprise-gateway.svc.cluster.local:8888
When I use busybox to check the Kubernetes DNS, the domain resolves as expected.
kubectl -n default exec -ti busybox nslookup enterprise-gateway.enterprise-gateway
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Server: 172.20.0.10
Address 1: 172.20.0.10 kube-dns.kube-system.svc.cluster.local
Name: enterprise-gateway.enterprise-gateway
Address 1: 172.20.186.249 enterprise-gateway.enterprise-gateway.svc.cluster.local
How do I get the domain name to work?
The Service config for the JEG looks like this...
kubectl describe svc enterprise-gateway --namespace enterprise-gateway
Name: enterprise-gateway
Namespace: enterprise-gateway
Labels: app=enterprise-gateway
app.kubernetes.io/managed-by=Helm
chart=enterprise-gateway-2.6.0
component=enterprise-gateway
heritage=Helm
release=enterprise-gateway
Annotations: meta.helm.sh/release-name: enterprise-gateway
meta.helm.sh/release-namespace: enterprise-gateway
Selector: app=enterprise-gateway
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 172.20.250.15
IPs: 172.20.250.15
Port: http 8888/TCP
TargetPort: 8888/TCP
NodePort: http 31366/TCP
Endpoints: 10.1.16.136:8888,10.1.2.228:8888,10.1.30.90:8888
Port: response 8877/TCP
TargetPort: 8877/TCP
NodePort: response 31201/TCP
Endpoints: 10.1.16.136:8877,10.1.2.228:8877,10.1.30.90:8877
Session Affinity: ClientIP
External Traffic Policy: Cluster
Events: <none>
OK, I don't know where to start; I have a bunch of findings. I will start with the eye-catching one. I have a working test project I can share later on, and I can elaborate more in this answer if needed.
Step 1
I see a mismatch in your IPs: the DNS lookup did not resolve the service name to the correct IP.
Address 1: 172.20.186.249 is different from IP: 172.20.250.15
To debug DNS:
kubectl exec "YOURPODNAME" -- cat /etc/resolv.conf
Verify that a search path and a name server are set up correctly
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
Check whether the kube-dns/CoreDNS pods are running:
kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
....
kube-dns-86f4d74b45-2qkfd 3/3 Running 232 133d
kube-proxy-b2frq 1/1 Running 0 15m
...
If the pod is running, there might be something wrong with the global DNS service
kubectl get svc --namespace=kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP
You might also need to check whether DNS endpoints are exposed:
kubectl get ep kube-dns --namespace=kube-system
NAME ENDPOINTS AGE
kube-dns 172.17.0.5:53,172.17.0.5:53 133d
These debugging actions will usually indicate the problem with your DNS configuration, or it will simply show you that a DNS add-on should be enabled in your cluster configuration.
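A quick end-to-end test is a throwaway busybox pod (the same pattern appears later in this thread; busybox:1.28 is a deliberate choice, since nslookup in some newer busybox images is known to misbehave):
kubectl run -i --tty --rm --image busybox:1.28 dns-check --restart=Never -- nslookup kubernetes.default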
Step 2
"When I use busybox to check the kubernetes dns"
This seems incorrect: looking at Address 1: 172.20.186.249, I am expecting to get an IP of the form 10.X.X.X.
Install dnsutils in the pod, as follows:
1. kubectl exec --stdin --tty "YOURPODNAME" -- apt update && apt-get -y install dnsutils
2. kubectl exec -it "YOURPODNAME" -- /bin/bash
3. Inside the pod, run apt-get install dnsutils again (weird, but it was needed).
4. Stay inside the pod and run nslookup "YOURSERVICENAME"; you will get an IP and a Name (DNS).
5. Check this IP: it needs to match the IP in the service description from kubectl describe svc "YOURSERVICENAME". The IP should be the same as the one from #4.
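For example, with the service from this question, the comparison would look roughly like this (a sketch; the point is that the nslookup answer must match the ClusterIP shown by describe):
kubectl exec -it "YOURPODNAME" -- nslookup enterprise-gateway.enterprise-gateway
kubectl -n enterprise-gateway describe svc enterprise-gateway | grep -E '^IPs?:'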
Step 3
Once you have Step 2 solved, you will be able to use the service name (FQDN) returned in Step 2, item #4.
To be continued...
I am running a Kubernetes cluster on GKE and I noticed that in kube-system the IP addresses of pods with
fluentd-gcp-...
kube-proxy-gke-gke-dev-cluster-default-pool-...
prometheus-to-...
are the same as those of the nodes, while other pods such as
event-exporter-v0.3.0-...
stackdriver-metadata-agent-cluster-level-...
fluentd-gcp-scaler-...
heapster-gke-...
kube-dns-...
l7-default-backend-...
metrics-server-v0.3.3-...
e.g.
kube-system fluentd-gcp-scaler-bfd6cf8dd-58m8j 1/1 Running 0 23h 10.36.1.6 dev-cluster-default-pool-c8a74531-96j4 <none> <none>
kube-system fluentd-gcp-v3.1.1-24n5s 2/2 Running 0 24h 10.10.1.5 dev-cluster-default-pool-c8a74531-96j4 <none> <none>
where the pod IP range is: 10.36.0.0/14
and nodes are on 10.10.1.0/24
have IP addresses in the pod address range. What is specific about the first three?
This is because pods such as kube-proxy, Fluentd, and Prometheus run directly on the host network via hostNetwork: true. You can describe those pods and verify that hostNetwork: true is present.
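A quick way to check without reading the whole spec is a jsonpath query (a sketch; the pod name is taken from the question's output):
kubectl -n kube-system get pod fluentd-gcp-v3.1.1-24n5s -o jsonpath='{.spec.hostNetwork}'
This prints true for host-network pods; empty output means the field is unset.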
As for why these pods need to run on the host network in the first place: kube-proxy needs access to the host's iptables, Prometheus collects metrics, and Fluentd collects logs from the host system.
You can deploy a sample pod such as nginx with hostNetwork: true and it will get the node IP. If you remove hostNetwork: true, it will get an IP from the pod CIDR range.
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
  restartPolicy: Always
  hostNetwork: true
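To confirm the effect, you can compare the pod IP with the node IP in one line (a sketch, assuming the pod is named nginx as above):
kubectl get pod nginx -o jsonpath='{.status.podIP} {.status.hostIP}{"\n"}'
With hostNetwork: true both addresses are identical; without it, the first one comes from the pod CIDR.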
I've applied the yaml for the kubernetes dashboard.
Now I want to expose this service with the public IP of my server: https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/#objectives
But there is no service/deployment on my cluster:
$ sudo kubectl get services kubernetes
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 63d
$ sudo kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
What did I do wrong?
Thanks for the help
The command that you ran fetches objects in the default namespace.
However, the Dashboard is deployed in the kube-system namespace.
kubectl -n kube-system get services kubernetes
kubectl -n kube-system get deployment
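If you are ever unsure which namespace an object lives in, a quick way is to list across all namespaces and filter (a sketch):
kubectl get deployments --all-namespaces | grep dashboard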
I am giving you this info according to the Kubernetes dashboard link that you shared, and specifically its YAML file.
Okay, thanks, now I get the right name:
sudo kubectl -n kube-system get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
calico-kube-controllers 1/1 1 1 63d
coredns 2/2 2 2 63d
kubernetes-dashboard 1/1 1 1 103m
tiller-deploy 0/1 1 0 63d
But I still can't expose the service
sudo kubectl expose deployment kubernetes-dashboard
Error from server (NotFound): deployments.extensions "kubernetes-dashboard" not found
As mentioned here
So, to reproduce and show how it works, I spawned a fresh new cluster on GKE.
Let's see what we have after applying the dashboard YAML:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
kubectl get deployment kubernetes-dashboard -n kube-system
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kubernetes-dashboard 1 1 1 1 3m22s
kubectl get services kubernetes-dashboard -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard ClusterIP 10.0.6.26 <none> 443/TCP 5m1
kubectl describe service kubernetes-dashboard -n kube-system
Name: kubernetes-dashboard
Namespace: kube-system
Labels: k8s-app=kubernetes-dashboard
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard"...
Selector: k8s-app=kubernetes-dashboard
Type: ClusterIP
IP: 10.0.6.26
Port: <unset> 443/TCP
TargetPort: 8443/TCP
Endpoints: 10.40.1.5:8443
Session Affinity: None
Events: <none>
During this deployment:
1) The kubernetes-dashboard deployment was created. Note that it was created with the k8s-app=kubernetes-dashboard label.
2) The kubernetes-dashboard service was created and works using the k8s-app=kubernetes-dashboard [selector](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/).
So basically, when you receive such an error, this is expected: kubectl expose deployment kubernetes-dashboard -n kube-system tries to create a new service named kubernetes-dashboard, which already exists.
Just to play with it, you can easily expose the same deployment under another service name, for example:
kubectl expose deployment kubernetes-dashboard -n kube-system --name kube-dashboard-service2
service/kube-dashboard-service2 exposed
Note that the default kubernetes-dashboard service is created with the ClusterIP type, so right now you are able to access it:
1) within the cluster
2) using kubectl proxy from your local machine
$ kubectl proxy
In browser: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
If you want to expose the dashboard externally, you can use:
1) Ingress
2) NodePort service type
In short: change the type from ClusterIP to NodePort via kubectl -n kube-system edit service kubernetes-dashboard and access the dashboard at https://[node_ip]:[port] (a one-line patch alternative is sketched after this list).
A more detailed article is here: How To Access Kubernetes Dashboard Externally
3) LoadBalancer service type. This is a cloud-specific feature, so it will work only with cloud providers:
Traffic from the external load balancer is directed at the backend
Pods. The cloud provider decides how it is load balanced.
Some cloud providers allow you to specify the loadBalancerIP. In those
cases, the load-balancer is created with the user-specified
loadBalancerIP. If the loadBalancerIP field is not specified, the
loadBalancer is set up with an ephemeral IP address. If you specify a
loadBalancerIP but your cloud provider does not support the feature,
the loadbalancerIP field that you set is ignored.
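For option 2, instead of editing the service interactively, the same type switch can be done with a one-line patch (a sketch with the same effect as the edit described above):
kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
kubectl -n kube-system get service kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}'
The second command prints the node port that was assigned.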
I'm trying to ping the kube-dns service from a dnstools pod using the cluster IP assigned to the kube-dns service. The ping request times out. From the same dnstools pod, I tried to curl the kube-dns service using the exposed port, but that timed out as well.
Following is the output of kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
default pod/busybox 1/1 Running 62 2d14h 192.168.1.37 kubenode <none>
default pod/dnstools 1/1 Running 0 2d13h 192.168.1.45 kubenode <none>
default pod/nginx-deploy-7c45b84548-ckqzb 1/1 Running 0 6d11h 192.168.1.5 kubenode <none>
default pod/nginx-deploy-7c45b84548-vl4kh 1/1 Running 0 6d11h 192.168.1.4 kubenode <none>
dmi pod/elastic-deploy-5d7c85b8c-btptq 1/1 Running 0 2d14h 192.168.1.39 kubenode <none>
kube-system pod/calico-node-68lc7 2/2 Running 0 6d11h 10.62.194.5 kubenode <none>
kube-system pod/calico-node-9c2jz 2/2 Running 0 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/coredns-5c98db65d4-5nprd 1/1 Running 0 6d12h 192.168.0.2 kubemaster <none>
kube-system pod/coredns-5c98db65d4-5vw95 1/1 Running 0 6d12h 192.168.0.3 kubemaster <none>
kube-system pod/etcd-kubemaster 1/1 Running 0 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/kube-apiserver-kubemaster 1/1 Running 0 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/kube-controller-manager-kubemaster 1/1 Running 1 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/kube-proxy-9hcgv 1/1 Running 0 6d11h 10.62.194.5 kubenode <none>
kube-system pod/kube-proxy-bxw9s 1/1 Running 0 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/kube-scheduler-kubemaster 1/1 Running 1 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/tiller-deploy-767d9b9584-5k95j 1/1 Running 0 3d9h 192.168.1.8 kubenode <none>
nginx-ingress pod/nginx-ingress-66wts 1/1 Running 0 5d17h 192.168.1.6 kubenode <none>
In the above output, why do some pods have an IP assigned from the 192.168.0.0/16 pod subnet, whereas others have an IP that is equal to the IP address of my node/master? (10.62.194.4 is the IP of my master, 10.62.194.5 is the IP of my node.)
This is the config.yml I used to initialize the cluster using kubeadm init --config=config.yml
apiServer:
  certSANs:
  - 10.62.194.4
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: dev-cluster
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Result of kubectl get svc --all-namespaces -o wide
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d12h <none>
default service/nginx-deploy ClusterIP 10.97.5.194 <none> 80/TCP 5d17h run=nginx
dmi service/elasticsearch ClusterIP 10.107.84.159 <none> 9200/TCP,9300/TCP 2d14h app=dmi,component=elasticse
dmi service/metric-server ClusterIP 10.106.117.2 <none> 8098/TCP 2d14h app=dmi,component=metric-se
kube-system service/calico-typha ClusterIP 10.97.201.232 <none> 5473/TCP 6d12h k8s-app=calico-typha
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 6d12h k8s-app=kube-dns
kube-system service/tiller-deploy ClusterIP 10.98.133.94 <none> 44134/TCP 3d9h app=helm,name=tiller
The command I ran was kubectl exec -ti dnstools -- curl 10.96.0.10:53
EDIT:
I raised this question because I got this error when trying to resolve service names from within the cluster. I was under the impression that I got this error because I cannot ping the DNS server from a pod.
Output of kubectl exec -ti dnstools -- nslookup kubernetes.default
;; connection timed out; no servers could be reached
command terminated with exit code 1
Output of kubectl exec dnstools -- cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local reddog.microsoft.com
options ndots:5
Result of kubectl get ep kube-dns --namespace=kube-system
NAME ENDPOINTS AGE
kube-dns 192.168.0.2:53,192.168.0.3:53,192.168.0.2:53 + 3 more... 6d13h
EDIT:
Pinging the CoreDNS pod directly using its pod IP times out as well:
/ # ping 192.168.0.2
PING 192.168.0.2 (192.168.0.2): 56 data bytes
^C
--- 192.168.0.2 ping statistics ---
24 packets transmitted, 0 packets received, 100% packet loss
EDIT:
I think something has gone wrong when I was setting up the cluster. Below are the steps I took when setting up the cluster:
Edit host files on master and worker to include the IP's and hostnames of the nodes
Disabled swap using swapoff -a and disabled swap permanently by editing /etc/fstab
Install docker prerequisites using apt-get install apt-transport-https ca-certificates curl software-properties-common -y
Added Docker GPG key using curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
Added Docker repo using add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Install Docker using apt-get update -y; apt-get install docker-ce -y
Install Kubernetes prerequisites using curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
Added Kubernetes repo using echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Update repo and install Kubernetes components using apt-get update -y; apt-get install kubelet kubeadm kubectl -y
Configure master node:
kubeadm init --apiserver-advertise-address=10.62.194.4 --apiserver-cert-extra-sans=10.62.194.4 --pod-network-cidr=192.168.0.0/16
Copy Kube config to $HOME: mkdir -p $HOME/.kube; sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config; sudo chown $(id -u):$(id -g) $HOME/.kube/config
Installed Calico using kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml; kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
On node:
On the node I did the kubeadm join command using the command printed out from kubeadm token create --print-join-command on the master
The Kubernetes system pods get assigned the host IP since they provide low-level services that are not dependent on an overlay network (or, in the case of Calico, even provide the overlay network). They have the IP of the node where they run.
A common pod uses the overlay network and gets assigned an IP from the Calico range, not from the metal node it runs on.
You can't access DNS (port 53) with HTTP using curl. You can use dig to query a DNS resolver.
A service IP is not reachable by ping since it is a virtual IP, used only as a routing handle for the iptables rules set up by kube-proxy; therefore a TCP connection works, but ICMP does not.
You can ping a pod IP though, since it is assigned from the overlay network.
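For example, from the dnstools pod used in the question (dnstools images typically ship dig), something like this should get an answer where curl on port 53 cannot:
kubectl exec -ti dnstools -- dig @10.96.0.10 kubernetes.default.svc.cluster.local +short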
You should check within the same namespace.
Currently, you are in the default namespace and curl to the kube-system namespace.
If you check from the same namespace, I think it will work.
In some cases, the local host that Elasticsearch publishes is not routable/accessible from other hosts. In these cases you will have to configure network.publish_host in the YAML config file, in order for Elasticsearch to use and publish the right address.
Try configuring network.publish_host to the right public address.
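In elasticsearch.yml this is a single line (the address below is a placeholder; use one that other hosts can actually reach):
network.publish_host: 192.0.2.10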
See more here:
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html#advanced-network-settings
Note that control plane components like the API server and etcd that run on the master node are bound to the host network, and hence you see the IP address of the master server.
On the other hand, the apps that you deployed are going to get their IPs from the pod subnet range; those differ from the cluster node IPs.
Try the steps below to test whether DNS is working or not.
Deploy nginx.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
  labels:
    app: nginx
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        emptyDir: {}
kubectl create -f nginx.yaml
master $ kubectl get po
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 1m
web-1 1/1 Running 0 1m
master $ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 35m
nginx ClusterIP None <none> 80/TCP 2m
master $ kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm
If you don't see a command prompt, try pressing enter.
/ # nslookup nginx
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: nginx
Address 1: 10.40.0.1 web-0.nginx.default.svc.cluster.local
Address 2: 10.40.0.2 web-1.nginx.default.svc.cluster.local
/ #
/ # nslookup web-0.nginx
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: web-0.nginx
Address 1: 10.40.0.1 web-0.nginx.default.svc.cluster.local
/ # nslookup web-0.nginx.default.svc.cluster.local
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: web-0.nginx.default.svc.cluster.local
Address 1: 10.40.0.1 web-0.nginx.default.svc.cluster.local
I'm writing some scripts that check the system to make sure of some cluster characteristics. Things running on private IP address spaces, etc. These checks are just a manual step when setting up a cluster, and used just for sanity checking.
They'll be run on each node, but I'd like a set of them to run when on the master node. Is there a bash, curl, kubectl, or another command that has information indicating the current node is a master node?
The master(s) usually has the 'master' role associated with it. For example:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-x-x-x-x.us-west-2.compute.internal Ready <none> 7d v1.11.2
ip-x-x-x-x.us-west-2.compute.internal Ready master 78d v1.11.2
ip-x-x-x-x.us-west-2.compute.internal Ready <none> 7d v1.11.2
ip-x-x-x-x.us-west-2.compute.internal Ready <none> 7d v1.11.2
ip-x-x-x-x.us-west-2.compute.internal Ready <none> 7d v1.11.2
It also has a label node-role.kubernetes.io/master associated with it. For example:
$ kubectl get node ip-x-x-x-x.us-west-2.compute.internal -o=yaml
apiVersion: v1
kind: Node
metadata:
  annotations:
    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
    node.alpha.kubernetes.io/ttl: "0"
    projectcalico.org/IPv4Address: x.x.x.x/20
    volumes.kubernetes.io/controller-managed-attach-detach: "true"
  creationTimestamp: 2018-07-23T21:10:22Z
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/instance-type: t3.medium
    beta.kubernetes.io/os: linux
    failure-domain.beta.kubernetes.io/region: us-west-2
    failure-domain.beta.kubernetes.io/zone: us-west-2c
    kubernetes.io/hostname: ip-x-x-x-x.us-west-2.compute.internal
    node-role.kubernetes.io/master: ""
Some more ways:
$ kubectl cluster-info
Kubernetes master is running at https://node1.example.com:8443
...
You can use kubectl with a label selector:
$ kubectl get nodes -l node-role.kubernetes.io/master=true
NAME STATUS ROLES AGE VERSION
node1.example.com Ready master 1d v1.10.5
node2.example.com Ready master 1d v1.10.5
And you can get specific data via jsonpath, e.g. master IPs/hostnames:
$ kubectl get nodes -l node-role.kubernetes.io/master=true -o 'jsonpath={.items[*].status.addresses[?(@.type=="InternalIP")].address}'
192.168.168.197 192.168.168.198
$ kubectl get nodes -l node-role.kubernetes.io/master=true -o 'jsonpath={.items[*].status.addresses[?(@.type=="Hostname")].address}'
node1.example.com node2.example.com
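Putting this together for the scripted check from the question, a minimal bash sketch (it assumes kubectl is configured on the node and that the Node object's name matches the output of hostname):
#!/bin/bash
# A sketch: succeed if the current host is one of the nodes labelled as master.
masters=$(kubectl get nodes -l node-role.kubernetes.io/master \
  -o jsonpath='{.items[*].metadata.name}')
if echo "$masters" | tr ' ' '\n' | grep -qxF "$(hostname)"; then
  echo "this node is a master"
else
  echo "this node is a worker"
  exit 1
fi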