Minikube has a specific node IP address (192.168.99.100) for its single-node cluster. If I use kubeadm to create a multi-node cluster, how do I find this IP address?
This should be fairly straightforward: kubectl get nodes -o wide
To get information about Kubernetes objects you should use kubectl get <resource> or kubectl describe <resource>.
From the docs:
Display one or many resources
Prints a table of the most important information about the specified resources. You can filter the list using a label selector and the --selector flag. If the desired resource type is namespaced you will only see results in your current namespace unless you pass --all-namespaces.
If you check the manual for kubectl get, you will find information about the -o flag.
-o, --output='': Output format. One of:
json|yaml|wide|name|custom-columns=...|custom-columns-file=...|go-template=...|go-template-file=...|jsonpath=...|jsonpath-file=...
See custom columns [http://kubernetes.io/docs/user-guide/kubectl-overview/#custom-columns], golang template
[http://golang.org/pkg/text/template/#pkg-overview] and jsonpath template
[http://kubernetes.io/docs/user-guide/jsonpath].
That means you can get the output in YAML or JSON format. Detailed information can be found in this doc.
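For example, if all you need is each node's InternalIP, a minimal jsonpath query could look like the sketch below (swap the address type for ExternalIP if that is what you are after):
$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'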
As @Bernard Halas mentioned, you can just use kubectl get nodes -o wide.
Another option is to use describe with grep. The -A flag prints the given number of lines of trailing context. It's helpful if you need this information listed per node.
$ kubectl describe node | grep Addresses: -A 4
Addresses:
InternalIP: 10.164.0.63
ExternalIP: 35.204.67.223
InternalDNS: gke-test-default-pool-d11b1330-g44z.c.composite-rune-239911.internal
Hostname: gke-test-default-pool-d11b1330-g44z.c.composite-rune-239911.internal
--
Addresses:
InternalIP: 10.164.0.61
ExternalIP: 35.204.63.113
InternalDNS: gke-test-default-pool-d11b1330-gtpj.c.composite-rune-239911.internal
Hostname: gke-test-default-pool-d11b1330-gtpj.c.composite-rune-239911.internal
--
Addresses:
InternalIP: 10.164.0.62
ExternalIP: 35.204.202.107
InternalDNS: gke-test-default-pool-d11b1330-r4dw.c.composite-rune-239911.internal
Hostname: gke-test-default-pool-d11b1330-r4dw.c.composite-rune-239911.internal
You can also use the YAML or JSON format. The output will be similar to the previous one.
$ kubectl get nodes -o yaml | grep addresses: -A 8
  addresses:
  - address: 10.164.0.63
    type: InternalIP
  - address: 35.204.67.223
    type: ExternalIP
  - address: gke-test-default-pool-d11b1330-g44z.c.composite-rune-239911.internal
    type: InternalDNS
  - address: gke-test-default-pool-d11b1330-g44z.c.composite-rune-239911.internal
    type: Hostname
...
In addition, if you need some specific output (only the information you need, including fields that are not printed by default), you can use custom columns. The column definitions follow the structure you see in the YAML/JSON output (JSONPath-style paths).
$ kubectl get pods -o custom-columns=Name:.metadata.name,NS:.metadata.namespace,HostIP:.status.hostIP,PodIP:status.podIP,REQ_CPU:.spec.containers[].resources.requests.cpu
Name NS HostIP PodIP REQ_CPU
httpd-5d8cbbcd67-gtzcx default 10.164.0.63 10.32.2.7 100m
nginx-7cdbd8cdc9-54dds default 10.164.0.62 10.32.1.5 100m
nginx-7cdbd8cdc9-54ggt default 10.164.0.62 10.32.1.3 100m
nginx-7cdbd8cdc9-bz86v default 10.164.0.62 10.32.1.4 100m
nginx-7cdbd8cdc9-zcvrf default 10.164.0.62 10.32.1.2 100m
nginx-test-59df8dcb7f-hlrcr default 10.164.0.63 10.32.2.4 100m
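The same idea applied to nodes gives a compact answer to the original question. This is only a sketch; if the filter expression is not accepted by your kubectl version, fall back to the wide or jsonpath output shown above:
$ kubectl get nodes -o custom-columns='NAME:.metadata.name,INTERNAL_IP:.status.addresses[?(@.type=="InternalIP")].address,EXTERNAL_IP:.status.addresses[?(@.type=="ExternalIP")].address'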
Generally, there is no such thing as a single IP of a Kubernetes cluster. Minikube has one because it's a special single-node case. Most production clusters will, one way or another, operate with many network-internal, cluster-internal and external IP addresses. For example, each node is usually deployed on a separate (virtual) machine that has its own IP address, either external or network-internal (like 10.x.y.z or 192.168.x.y) depending on your network setup. Moreover, many Kubernetes objects, like pods or services, have their own IPs as well (cluster-internal or external).
Now the question is what do you need the IP for:
if you are looking for the address of your Kubernetes API server (the endpoint kubectl talks to), then for clusters created manually with kubeadm this will be the IP of the master node you created with the kubeadm init command (assuming a single-master setup); kubectl cluster-info also prints it, as shown in the sketch after this list. See this official doc for details. To talk to your cluster using kubectl you will need some authorization data in addition to its IP: see the subsequent sections of the mentioned document for how to obtain it.
if you are looking for the IP of a LoadBalancer type service, then it will be reported, among lots of other details, in the output of kubectl get service name-of-your-service -o yaml or kubectl describe service name-of-your-service. Note however that clusters created with kubeadm don't provide external load balancers on their own (that's why they are called external), and if you intend to set up a fully functional production cluster manually, you will need to use something like MetalLB in addition.
if you are looking for the IPs behind NodePort type services, then these will be the IPs of all the worker node (virtual) machines that you joined to your cluster by running the kubeadm join command on them. If you don't remember them, you can use kubectl get nodes -o wide as suggested in the other answer.
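For the first case, kubectl cluster-info is a quick sanity check: it prints the API server endpoint that kubectl is currently configured to talk to. The address below is a placeholder, 6443 is only the kubeadm default port, and the exact wording of the output varies between Kubernetes versions:
$ kubectl cluster-info
Kubernetes control plane is running at https://<master-ip>:6443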
This command displays the node name and the private and public IP addresses (columns 1, 6 and 7 of the wide output; note that column positions may shift between kubectl versions):
kubectl get nodes -o wide | awk -v OFS='\t\t' '{print $1, $6, $7}'
Here's a command that should show the internal IP addresses of each node in the cluster:
ubuntu#astrocyte:~$ kubectl get nodes -o yaml | grep -- "- address:"
- address: 192.168.1.6
- address: astrocyte
- address: 192.168.1.20
- address: axon2.local
- address: 192.168.1.7
- address: axon3.local
It also shows hostnames, if you have them configured
Related
I'm running colima with kubernetes like:
colima start --kubernetes
I created a few running pods, and I want to access them through the browser.
But I don't know what the colima IP (or Kubernetes node IP) is.
Help appreciated.
You can get the node IP like this:
kubectl get node
NAME STATUS ROLES AGE VERSION
nodeName Ready <none> 15h v1.26.0
Then with the nodeName:
kubectl describe node nodeName
That gives you a description of the node; look for this section:
Addresses:
InternalIP: 10.165.39.165
Hostname: master
Ping it to verify the network.
Find your hosts file on the Mac (/etc/hosts) and make an entry like:
10.165.39.165 test.local
This let you access the cluster with a domain name.
Ping it to verify.
You cannot access a ClusterIP service from outside the cluster.
To access your pod you have several possibilities.
If your service is of type ClusterIP, you can create a temporary connection from your host with a port forward:
kubectl port-forward svc/yourservicename localport:podport
(I would recommend this) create a service of type NodePort (a sketch manifest is shown below).
Then
kubectl get svc -o wide
This shows you the NodePort (by default in the 30000-32767 range).
You can now access the pod via test.local:nodePort or ipAddress:nodePort.
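A minimal sketch of such a NodePort service; all names and ports here are placeholders you need to adapt to your own pod:
apiVersion: v1
kind: Service
metadata:
  name: yourservicename        # placeholder name
spec:
  type: NodePort
  selector:
    app: yourapp               # must match the labels of your pod
  ports:
    - port: 80                 # port of the service inside the cluster
      targetPort: 8080         # port your container listens on
      nodePort: 30080          # optional; omit it and Kubernetes picks one from the range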
Note: If you deployed in a namespace other than default, add -n yournamespace in the kubectl commands.
Update:
If you want to start colima with an IP address, first find an available one on your local network.
You can get your network settings with:
ifconfig
Find the network; it should be the same as that of your Internet router.
Look for the subnet. Most likely 255.255.255.0.
The value to pass is then:
--network-address xxx.xxx.xxx.xxx/24
If the subnet is 255.255.0.0, use /16 instead. That's unlikely if you are connecting from home; inside a company network, however, it is possible.
Again, check with ping and follow the steps from the beginning to verify the Kubernetes node configuration.
I am trying Kubernetes and seem to have hit a bit of a hurdle. The problem is that from within my pod I can't curl local hostnames such as wrkr1 or wrkr2 (machine hostnames on my network), but I can successfully resolve hostnames such as google.com or stackoverflow.com.
My cluster is a basic setup with one master and 2 worker nodes.
What works from within the pod:
curl to google.com from pod -- works
curl to another service (kubernetes) from pod -- works
curl to another machine on same LAN via its IP address such as 192.168.x.x -- works
curl to another machine on same LAN via its hostname such as wrkr1 -- does not work
What works from the node hosting pod:
curl to google.com -- works
curl to another machine on same LAN via its IP address such as 192.168.x.x -- works
curl to another machine on same LAN via its hostname such as wrkr1 -- works
Note: the pod CIDR is completely different from the IP range used in the LAN.
The node contains a hosts file with an entry for wrkr1's IP address (I've checked that the node can resolve the hostname without it too, but I read somewhere that a pod inherits its node's DNS resolution, so I've kept the entry).
Kubernetes Version: 1.19.14
Ubuntu Version: 18.04 LTS
I need help understanding whether this is normal behavior, and what can be done if I want the pod to be able to resolve hostnames on the local LAN as well.
What happens
Need help as to whether this is normal behavior
This is normal behaviour. There's no DNS server in the network where your virtual machines are hosted, and Kubernetes has its own DNS server inside the cluster. It simply doesn't know what happens on your host, especially in /etc/hosts, because pods don't have access to that file.
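You can see this for yourself by looking at the DNS configuration a pod actually received. The pod name here is a placeholder and the values shown are only typical examples:
$ kubectl exec -it <your-pod> -- cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5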
I read somewhere that a pod inherits its node's DNS resolution so I've kept the entry
This is the point where a tricky thing happens. There are four available DNS policies, which are applied per pod. We will take a look at the two of them that are usually used:
"Default": The Pod inherits the name resolution configuration from the node that the pods run on. See related discussion for more details.
"ClusterFirst": Any DNS query that does not match the configured cluster domain suffix, such as "www.kubernetes.io", is forwarded to the upstream nameserver inherited from the node. Cluster administrators may have extra stub-domain and upstream DNS servers configured
The trickiest ever part is this (from the same link above):
Note: "Default" is not the default DNS policy. If dnsPolicy is not
explicitly specified, then "ClusterFirst" is used.
That means that all pods that do not have a DNS policy set will run with ClusterFirst and won't be able to see /etc/resolv.conf on the host. I tried changing this to Default and indeed the pod can then resolve everything the host can; however, internal resolution stops working, so it's not an option.
For example, the coredns deployment is run with the Default dnsPolicy, which allows coredns to resolve hosts.
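For illustration only, this is where dnsPolicy sits in a pod spec. A minimal sketch with an arbitrary name and image, not something to apply blindly given the trade-off described above:
apiVersion: v1
kind: Pod
metadata:
  name: dns-test               # arbitrary example name
spec:
  dnsPolicy: Default           # inherit the node's resolver config instead of ClusterFirst
  containers:
    - name: busybox
      image: busybox:1.36
      command: ["sleep", "3600"]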
How this can be resolved
1. Add local domain to coreDNS
This requires adding A records, one per host. Here's a part of the edited coredns configmap:
This should be within the .:53 { block:
file /etc/coredns/local.record local
This part goes right after the block above ends (the SOA information was taken from the example; it doesn't make any difference here):
local.record: |
  local. IN SOA sns.dns.icann.org. noc.dns.icann.org. 2015082541 7200 3600 1209600 3600
  wrkr1. IN A 172.10.10.10
  wrkr2. IN A 172.11.11.11
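Put together, the Corefile part of the configmap might end up looking roughly like the sketch below. Only the file line is the addition; the remaining plugins are whatever your stock CoreDNS config already contains and may well differ:
Corefile: |
  .:53 {
      errors
      health
      kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
      }
      file /etc/coredns/local.record local
      forward . /etc/resolv.conf
      cache 30
  }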
Then the coreDNS deployment should be edited to include this file:
$ kubectl edit deploy coredns -n kube-system
volumes:
  - configMap:
      defaultMode: 420
      items:
        - key: Corefile
          path: Corefile
        - key: local.record   # 1st line to add
          path: local.record  # 2nd line to add
      name: coredns
And restart coreDNS deployment:
$ kubectl rollout restart deploy coredns -n kube-system
Just in case check if coredns pods are running and ready:
$ kubectl get pods -A | grep coredns
kube-system coredns-6ddbbfd76-mk2wv 1/1 Running 0 4h46m
kube-system coredns-6ddbbfd76-ngrmq 1/1 Running 0 4h46m
If everything's done correctly, newly created pods will now be able to resolve hosts by their names. Please find an example in the coredns documentation.
2. Set up a DNS server in the network
While avahi looks similar to a DNS server, it does not act like one. It's not possible to set up request forwarding from coredns to avahi, while it is possible to set up a proper DNS server in the network and have everything resolved that way.
3. Deploy avahi to kubernetes cluster
There's a ready image with avahi here. If it's deployed into the cluster with dnsPolicy set to ClusterFirstWithHostNet and, most importantly, hostNetwork: true, it will be able to use the host adapter to discover all available hosts within the network.
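A minimal sketch of what such a deployment could look like. The image reference is a placeholder for whichever avahi image you pick; the rest only shows where hostNetwork and dnsPolicy go:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: avahi                            # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: avahi
  template:
    metadata:
      labels:
        app: avahi
    spec:
      hostNetwork: true                  # use the node's network adapter for discovery
      dnsPolicy: ClusterFirstWithHostNet # keep cluster DNS usable together with hostNetwork
      containers:
        - name: avahi
          image: <avahi-image>           # placeholder for the avahi image you choose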
Useful links:
Pods DNS policy
Custom DNS entries for kubernetes
Is there any direct command to fetch the podcidr assigned to each node when using the calico CNI?
I am looking for the exact network and netmask assigned to each node. I am not able to fetch it from kubectl get nodes, neither via the podCIDR value nor via the projectcalico.org/IPv4VXLANTunnelAddr annotation. It also looks like the annotation will differ depending on whether calico uses a VXLAN or IPIP tunnel.
I tried to fetch it via the podCIDR key from the nodes and got the following output, which wasn't the network actually assigned to the nodes.
kubectl get nodes -oyaml | grep -i podcidr -B 1
spec:
  podCIDR: 192.168.0.0/24
  podCIDRs:
--
spec:
  podCIDR: 192.168.2.0/24
  podCIDRs:
I tried to fetch it via the calico annotation. I was able to find the network but the netmask was missing.
kubectl get nodes -oyaml | grep -i ipv4vxlan
projectcalico.org/IPv4VXLANTunnelAddr: 192.168.33.64
projectcalico.org/IPv4VXLANTunnelAddr: 192.168.253.192
I tried to fetch it via the calico pod and found the exact network and netmask, i.e. 192.168.33.64/26, in the calico log.
kubectl logs calico-node-h2s9w -n calico-system | grep cidr
2020-12-14 06:54:50.783 [INFO][18] tunnel-ip-allocator/ipam.go 140:
Attempting to load block cidr=192.168.33.64/26 host="calico-master"
But I want to avoid looking at the logs of the calico pod on each node.
Is there a better way to find the podcidr assigned to each node via a single command?
You can use etcdctl to see the details of the subnet block assigned to each node.
ETCDCTL_API=3 etcdctl get --prefix --keys-only /calico/ipam/v2/host/node1/ipv4/block/
The above example for a node named node1 will give output something like this:
/calico/ipam/v2/host/node1/ipv4/block/192.168.228.192-26
It looks like calico adds a custom resource called ipamblocks, which contains the podcidr assigned to each cluster node.
The name of the custom resource itself contains the node's podcidr.
kubectl get ipamblocks.crd.projectcalico.org
NAME AGE
10-42-123-0-26 89d
10-42-187-192-26 89d
Command to fetch the exact podcidr and nodeip:
kubectl get ipamblocks.crd.projectcalico.org -o jsonpath="{range .items[*]}{'podNetwork: '}{.spec.cidr}{'\t NodeIP: '}{.spec.affinity}{'\n'}{end}"
podNetwork: 10.42.123.0/26 NodeIP: host:<node1-ip>
podNetwork: 10.42.187.192/26 NodeIP: host:<node2-ip>
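The same two fields can also be printed as a table with custom columns; this is equivalent to the jsonpath above, just easier to read:
kubectl get ipamblocks.crd.projectcalico.org -o custom-columns='POD_NETWORK:.spec.cidr,NODE:.spec.affinity'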
So this has been working forever. I have a few simple services running in GKE and they refer to each other via the standard service.namespace DNS names.
Today all DNS name resolution stopped working. I haven't changed anything, although this may have been triggered by a master upgrade.
/ambassador # nslookup ambassador-monitor.default
nslookup: can't resolve '(null)': Name does not resolve
nslookup: can't resolve 'ambassador-monitor.default': Try again
/ambassador # cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local c.snowcloud-01.internal google.internal
nameserver 10.207.0.10
options ndots:5
Master version 1.14.7-gke.14
I can talk cross-service using their IP addresses, it's just DNS that's not working.
Not really sure what to do about this...
The easiest way to verify whether there is a problem with your kube-dns is to look at the logs in Stackdriver [https://cloud.google.com/logging/docs/view/overview].
You should be able to find DNS resolution failures in the logs for the pods, with a filter such as the following:
resource.type="container"
("UnknownHost" OR "lookup fail" OR "gaierror")
Be sure to check logs for each container. Because the exact names and numbers of containers can change with the GKE version, you can find them like so:
kubectl get pod -n kube-system -l k8s-app=kube-dns -o \
jsonpath='{range .items[*].spec.containers[*]}{.name}{"\n"}{end}' | sort -u
kubectl get pods -n kube-system -l k8s-app=kube-dns
Has the pod been restarted frequently? Look for OOMs in the node console. The nodes for each pod can be found like so:
kubectl get pod -n kube-system -l k8s-app=kube-dns -o \
jsonpath='{range .items[*]}{.spec.nodeName} pod={.metadata.name}{"\n"}{end}'
The kube-dns pod contains four containers:
kube-dns process watches the Kubernetes master for changes in Services and Endpoints, and maintains in-memory lookup structures to serve DNS requests,
dnsmasq adds DNS caching to improve performance,
sidecar provides a single health check endpoint while performing dual health checks (for dnsmasq and kubedns). It also collects dnsmasq metrics and exposes them in the Prometheus format,
prometheus-to-sd scrapes the metrics exposed by sidecar and sends them to Stackdriver.
By default, the dnsmasq container accepts 150 concurrent requests. Requests beyond this are simply dropped and result in failed DNS resolution, including resolution for metadata. To check for this, view the logs with the following filter:
resource.type="container"resource.labels.cluster_name="<cluster-name>"resource.labels.namespace_id="kube-system"logName="projects/<project-id>/logs/dnsmasq""Maximum number of concurrent DNS queries reached"
If legacy Stackdriver logging is disabled for the cluster, use the following filter:
resource.type="k8s_container"resource.labels.cluster_name="<cluster-name>"resource.labels.namespace_name="kube-system"resource.labels.container_name="dnsmasq""Maximum number of concurrent DNS queries reached"
If Stackdriver logging is disabled, execute the following:
kubectl logs --tail=1000 --namespace=kube-system -l k8s-app=kube-dns -c dnsmasq | grep 'Maximum number of concurrent DNS queries reached'
Additionally, you can try the command [dig ambassador-monitor.default @10.207.0.10] from each node to verify whether this is only impacting one node. If it is, you can simply re-create the impacted node.
It appears that I hit a bug that caused the gke-metadata server to start crash looping (which in turn prevented kube-dns from working).
Creating a new pool with a previous version (1.14.7-gke.10) and migrating to it fixed everything.
I am told a fix has already been submitted.
Thank you for your suggestions.
Start by debugging your Kubernetes services [1]. This will tell you whether it is a k8s resource issue or Kubernetes itself is failing. Once you understand that, you can proceed to fix it. You can post the results here if you want to follow up.
[1] https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/
Kubernetes assigns an IP address for each container, but how can I acquire the IP address from a container in the Pod? I couldn't find a way in the documentation.
Edit: I'm going to run an Aerospike cluster in Kubernetes, and the config files need the pod's own IP address. I'm attempting to use confd to set the hostname; I would use the environment variable if it were set.
The simplest answer is to ensure that your pod or replication controller yaml/json files add the pod IP as an environment variable by adding the config block defined below (the block additionally makes the pod name and namespace available to the pod):
env:
  - name: MY_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: MY_POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: MY_POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
Recreate the pod/rc and then try
echo $MY_POD_IP
also run env to see what else kubernetes provides you with.
Some clarifications (not really an answer)
In kubernetes, every pod gets assigned an IP address, and every container in the pod gets assigned that same IP address. Thus, as Alex Robinson stated in his answer, you can just use hostname -i inside your container to get the pod IP address.
I tested with a pod running two dumb containers, and indeed hostname -i was outputting the same IP address inside both containers. Furthermore, that IP was equivalent to the one obtained using kubectl describe pod from outside, which validates the whole thing IMO.
However, PiersyP's answer seems cleaner to me.
Sources
From kubernetes docs:
The applications in a pod all use the same network namespace (same IP and port space), and can thus “find” each other and communicate using localhost. Because of this, applications in a pod must coordinate their usage of ports. Each pod has an IP address in a flat shared networking space that has full communication with other physical computers and pods across the network.
Another piece from kubernetes docs:
Until now this document has talked about containers. In reality, Kubernetes applies IP addresses at the Pod scope - containers within a Pod share their network namespaces - including their IP address. This means that containers within a Pod can all reach each other’s ports on localhost.
kubectl describe pods <name of pod> will give you some information including the IP
kubectl get pods -o wide
Gives you a list of pods with name, status, IP, node...
POD_HOST=$(kubectl get pod $POD_NAME --template={{.status.podIP}})
This command will return the pod's IP.
The container's IP address should be properly configured inside of its network namespace, so any of the standard linux tools can get it. For example, try ifconfig, ip addr show, hostname -I, etc. from an attached shell within one of your containers to test it out.
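For example, from outside you could run one of those tools in an existing container via kubectl exec. The pod name is a placeholder and the address shown is only an illustration:
kubectl exec -it <name of pod> -- hostname -i
10.32.1.5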
You could use
kubectl describe pod `hostname` | grep IP | sed -E 's/IP:[[:space:]]+//'
which is based on what @mibbit suggested.
This takes the following facts into account:
hostname is set to the pod's name, but this might change in the future
kubectl was manually placed in the container (possibly when the image was built)
Kubernetes provides a service account credential to the container implicitly as described in Accessing the Cluster / Accessing the API from a Pod, i.e. /var/run/secrets/kubernetes.io/serviceaccount in the container
Even simpler to remember than the sed approach is to use awk.
Here is an example, which you can run on your local machine:
kubectl describe pod <podName> | grep IP | awk '{print $2}'
The IP itself is in column 2 of the output, hence $2.
In some cases, instead of relying on the downward API, programmatically reading the local IP address (from the network interfaces) from inside the container also works.
For example, in golang:
https://stackoverflow.com/a/31551220/6247478
Containers have the same IP as the pod they are in.
So from inside the container you can just run ip a, and the IP you get is also the pod's IP.