I wish to sniff and extract all DNS records from Kubernetes: client IP, server IP, date, query type, etc.
I have set up a Kubernetes service.
It is online and running. There I created several containerized micro-services that generate DNS queries (HTTP requests to external addresses). How can I sniff this traffic? Is there a way to extract logs with DNS records?
Given that you use CoreDNS as your cluster DNS service, you can configure it to log queries, errors etc. to stdout. CoreDNS has been available as an alternative to kube-dns since k8s version 1.11, so if you're running a cluster of version >1.11 there's a good chance that you're using CoreDNS.
The CoreDNS service usually™️ lives in the kube-system namespace and can be reconfigured using the provided ConfigMap.
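For example, the ConfigMap can be edited in place with:
kubectl -n kube-system edit configmap coredns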
Example of how to log everything to stdout, taken from the README:
. {
    ...
    log
    ...
}
When you've reconfigured CoreDNS you can check the Pod logs with:
kubectl logs -n kube-system <POD NAME>
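For reference, a minimal sketch of what the edited Corefile inside the coredns ConfigMap might look like; the plugins other than log are typical defaults and may differ in your cluster:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        log            # log all queries to stdout
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
    }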
I have successfully extracted DNS logs using the answer above. My new problem is that I can't see resolution data (i.e. RDATA), such as the resolved IP or other response info.
I am having real trouble understanding how I am supposed to debug my current situation. I have followed the setup instructions from https://docs.substra.org/en/stable/contributing/getting-started.html#
There is a backend service which was created as a ClusterIP and therefore cannot be accessed from the host.
I created a load balancer for this purpose, using the command:
kubectl expose deployment deployment-name --port=8000 --target-port=8000 \
--name=lb-service --type=LoadBalancer
However, the attempt to access the backend service failed when I used the LoadBalancer ingress IP and NodePort port, with a connection timeout. I'd like to see the relevant logs to check where the problem occurred. However, kubectl logs apparently only shows logs for pods, whereas the load balancer, according to the kubectl expose command, is attached to the deployment. Therefore, I am not able to see any logs related either to the load balancer service or to the deployment component.
When I looked at the pod which is supposed to be hosting the deployment, the log showed no error.
Can someone point out where do I look for logs that can debug this failed connectivity?
You probably need to look at the ingress logs, see this page from the documentation: https://kubernetes.github.io/ingress-nginx/troubleshooting/.
It is true that you can only get logs from pods. However, that is sufficient to see the relevant error messages.
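As a starting point, a couple of standard commands show whether the Service actually selects any pods and which endpoints it routes to (the service name here matches the expose command above):
kubectl describe service lb-service
kubectl get endpoints lb-service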
I am trying Kubernetes and seem to have hit a bit of a hurdle. The problem is that from within my pod I can't curl local hostnames such as wrkr1 or wrkr2 (machine hostnames on my network), but I can successfully resolve hostnames such as google.com or stackoverflow.com.
My cluster is a basic setup with one master and 2 worker nodes.
What works from within the pod:
curl to google.com from pod -- works
curl to another service (kubernetes) from pod -- works
curl to another machine on same LAN via its IP address such as 192.168.x.x -- works
curl to another machine on same LAN via its hostname such as wrkr1 -- does not work
What works from the node hosting the pod:
curl to google.com -- works
curl to another machine on same LAN via its IP address such as 192.168.x.x -- works
curl to another machine on same LAN via its hostname such as wrkr1 -- works
Note: the pod CIDR is completely different from the IP range used on the LAN.
The node contains a hosts file with an entry corresponding to wrkr1's IP address (I've checked that the node can resolve the hostname without it too, but I read somewhere that a pod inherits its node's DNS resolution, so I've kept the entry).
Kubernetes Version: 1.19.14
Ubuntu Version: 18.04 LTS
I need help understanding whether this is normal behavior, and what can be done if I want pods to be able to resolve hostnames on the local LAN as well.
What happens
Need help as to whether this is normal behavior
This is normal behaviour: there's no DNS server in the network where your virtual machines are hosted, and Kubernetes has its own DNS server inside the cluster. It simply doesn't know about what happens on your host, especially in /etc/hosts, because pods don't have access to that file.
I read somewhere that a pod inherits its node's DNS resolution so I've kept the entry
This is where a tricky thing happens. There are four available DNS policies, which are applied per pod. We will take a look at the two of them that are usually used:
"Default": The Pod inherits the name resolution configuration from the node that the pods run on. See related discussion for more details.
"ClusterFirst": Any DNS query that does not match the configured cluster domain suffix, such as "www.kubernetes.io", is forwarded to the upstream nameserver inherited from the node. Cluster administrators may have extra stub-domain and upstream DNS servers configured
The trickiest ever part is this (from the same link above):
Note: "Default" is not the default DNS policy. If dnsPolicy is not
explicitly specified, then "ClusterFirst" is used.
That means that all pods that do not have a DNS policy set will run with ClusterFirst, and they won't be able to see /etc/resolv.conf on the host. I tried changing this to Default and indeed, the pod can then resolve everything the host can; however, internal cluster resolution stops working, so it's not an option.
For example, the coredns deployment runs with the Default dnsPolicy, which allows coredns to resolve hosts.
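To illustrate, a minimal test pod sketch with the policy set explicitly (the pod name and image are just examples):
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  dnsPolicy: Default    # inherit the node's name resolution instead of ClusterFirst
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]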
How this can be resolved
1. Add a local domain to CoreDNS
This requires adding an A record per host. Here's the relevant part of an edited coredns ConfigMap:
This should be within the .:53 { ... } block:
file /etc/coredns/local.record local
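For orientation, its placement in a typical Corefile might look like this; the other plugins shown are common defaults and may differ in your cluster:
.:53 {
    errors
    health
    file /etc/coredns/local.record local   # serve the local zone from the mounted file
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
}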
The record file itself goes into the same ConfigMap as a new key, right after the Corefile block ends (the SOA line was taken from the example; its exact values don't make any difference here):
local.record: |
  local.  IN  SOA  sns.dns.icann.org. noc.dns.icann.org. 2015082541 7200 3600 1209600 3600
  wrkr1.  IN  A    172.10.10.10
  wrkr2.  IN  A    172.11.11.11
Then the coredns Deployment should be edited to mount this file:
$ kubectl edit deploy coredns -n kube-system
volumes:
- configMap:
    defaultMode: 420
    items:
    - key: Corefile
      path: Corefile
    - key: local.record  # 1st line to add
      path: local.record # 2nd line to add
    name: coredns
And restart coreDNS deployment:
$ kubectl rollout restart deploy coredns -n kube-system
Just in case, check that the coredns pods are running and ready:
$ kubectl get pods -A | grep coredns
kube-system coredns-6ddbbfd76-mk2wv 1/1 Running 0 4h46m
kube-system coredns-6ddbbfd76-ngrmq 1/1 Running 0 4h46m
If everything's done correctly, newly created pods will now be able to resolve hosts by their names. Please find an example in the CoreDNS documentation.
2. Set up a DNS server in the network
While avahi looks similar to a DNS server, it does not act like one. It's not possible to set up request forwarding from coredns to avahi, while it is possible to set up a proper DNS server in the network and this way have everything resolved.
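As a sketch, assuming such a DNS server exists at 192.168.0.1 and serves a lan. domain (both hypothetical), an extra server block in the Corefile could forward those queries to it:
lan:53 {
    errors
    cache 30
    forward . 192.168.0.1   # hypothetical LAN DNS server
}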
3. Deploy avahi to the kubernetes cluster
There's a ready image with avahi here. If it's deployed into the cluster with dnsPolicy set to ClusterFirstWithHostNet and, most importantly, hostNetwork: true, it will be able to use the host adapter to discover all available hosts within the network.
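A minimal sketch of the relevant Deployment spec fields (the image name is hypothetical):
spec:
  template:
    spec:
      hostNetwork: true                   # use the host's network adapter
      dnsPolicy: ClusterFirstWithHostNet  # keep cluster DNS despite hostNetwork
      containers:
      - name: avahi
        image: example/avahi:latest       # hypothetical image, see link above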
Useful links:
Pods DNS policy
Custom DNS entries for kubernetes
So this has been working forever. I have a few simple services running in GKE and they refer to each other via the standard service.namespace DNS names.
Today all DNS name resolution stopped working. I haven't changed anything, although this may have been triggered by a master upgrade.
/ambassador # nslookup ambassador-monitor.default
nslookup: can't resolve '(null)': Name does not resolve
nslookup: can't resolve 'ambassador-monitor.default': Try again
/ambassador # cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local c.snowcloud-01.internal google.internal
nameserver 10.207.0.10
options ndots:5
Master version 1.14.7-gke.14
I can talk cross-service using their IP addresses; it's just DNS that's not working.
Not really sure what to do about this...
The easiest way to verify whether there is a problem with your kube-dns is to look at the logs in Stackdriver [https://cloud.google.com/logging/docs/view/overview].
You should be able to find DNS resolution failures in the logs for the pods, with a filter such as the following:
resource.type="container"
("UnknownHost" OR "lookup fail" OR "gaierror")
Be sure to check logs for each container. Because the exact names and numbers of containers can change with the GKE version, you can find them like so:
kubectl get pod -n kube-system -l k8s-app=kube-dns -o \
  jsonpath='{range .items[*].spec.containers[*]}{.name}{"\n"}{end}' | sort -u
kubectl get pods -n kube-system -l k8s-app=kube-dns
Has the pod been restarted frequently? Look for OOMs in the node console. The nodes for each pod can be found like so:
kubectl get pod -n kube-system -l k8s-app=kube-dns -o \
jsonpath='{range .items[*]}{.spec.nodeName} pod={.metadata.name}{"\n"}{end}'
The kube-dns pod contains four containers:
kube-dns watches the Kubernetes master for changes in Services and Endpoints, and maintains in-memory lookup structures to serve DNS requests,
dnsmasq adds DNS caching to improve performance,
sidecar provides a single health check endpoint while performing dual health checks (for dnsmasq and kubedns). It also collects dnsmasq metrics and exposes them in the Prometheus format,
prometheus-to-sd scrapes the metrics exposed by sidecar and sends them to Stackdriver.
By default, the dnsmasq container accepts 150 concurrent requests. Requests beyond this are simply dropped and result in failed DNS resolution, including resolution for metadata. To check for this, view the logs with the following filter:
resource.type="container"resource.labels.cluster_name="<cluster-name>"resource.labels.namespace_id="kube-system"logName="projects/<project-id>/logs/dnsmasq""Maximum number of concurrent DNS queries reached"
If legacy Stackdriver logging is disabled for the cluster, use the following filter:
resource.type="k8s_container"
resource.labels.cluster_name="<cluster-name>"
resource.labels.namespace_name="kube-system"
resource.labels.container_name="dnsmasq"
"Maximum number of concurrent DNS queries reached"
If Stackdriver logging is disabled, execute the following:
kubectl logs --tail=1000 --namespace=kube-system -l k8s-app=kube-dns -c dnsmasq | grep 'Maximum number of concurrent DNS queries reached'
Additionally, you can try the command dig ambassador-monitor.default @10.207.0.10 from each node to verify whether this is impacting only one node. If it is, you can simply re-create the impacted node.
It appears that I hit a bug that caused the gke-metadata-server to start crash looping (which in turn prevented kube-dns from working).
Creating a new pool with a previous version (1.14.7-gke.10) and migrating to it fixed everything.
I am told a fix has already been submitted.
Thank you for your suggestions.
Start by debugging your kubernetes services [1]. This will tell you whether it is a k8s resource issue or kubernetes itself is failing. Once you understand that, you can proceed to fix it. You can post results here if you want to follow up.
[1] https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/
I have a Kubernetes master and node set up in two CentOS VMs on my Windows 10 machine.
I used flannel for CNI and deployed ambassador as an API gateway.
As the ambassador routes did not work, I analysed further and understood that the DNS (IP 10.96.0.10) is not accessible from a busybox pod, which means that none of the service names can be resolved. Could I get any suggestions please?
1. You should use the newest version of Flannel.
Flannel does not set up service IPs, but kube-proxy does; you should look at kube-proxy on your nodes and ensure it is not reporting errors.
I'd suggest taking a look at https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#tabs-pod-install-4 and ensure you have met the requirements stated there.
Similar issue but with Calico plugin you can find here: https://github.com/projectcalico/calico/issues/1798
2. Check if port 8285 is open: flannel uses UDP port 8285 (with the udp backend) for sending encapsulated IP packets. Make sure to enable this traffic to pass between the hosts.
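A quick way to check from one of the hosts, for example (assuming the udp backend; firewall rules may be managed differently on your distro):
sudo ss -lun | grep 8285          # is flanneld listening on UDP 8285?
sudo iptables -L -n | grep 8285   # is the traffic allowed through?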
3. Ambassador includes an integrated diagnostics service to help with troubleshooting, this may be useful for you. By default, this is not exposed to the Internet. To view it, we'll need to get the name of one of the Ambassador pods:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
ambassador-3655608000-43x86 1/1 Running 0 2m
ambassador-3655608000-w63zf 1/1 Running 0 2m
Forwarding local port 8877 to one of the pods:
kubectl port-forward ambassador-3655608000-43x86 8877
will then let us view the diagnostics at http://localhost:8877/ambassador/v0/diag/.
The first point should solve your problem; if not, try the remaining ones.
I hope this helps.
My pod (pod1) internally can connect to another pod using its service like the following:
pod2-service.namespace.svc.cluster.local
However, I want pod1 to connect to pod2 using a URL like abc.com which is not registered in a DNS. Basically, I want pod1 to resolve abc.com as pod2-service.namespace.svc.cluster.local.
I was looking at hostAliases here:
https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/.
However, it needs an IP. How can I do this in Kubernetes?
I think you can use a fixed IP as the service IP of your pod2's Service, then use this IP in your hostAliases definition.
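A sketch of that idea; the clusterIP value and the selector are assumptions, and the IP must be a free address inside your cluster's service CIDR:
apiVersion: v1
kind: Service
metadata:
  name: pod2-service
  namespace: namespace
spec:
  clusterIP: 10.96.100.100   # assumed free IP in the service CIDR
  selector:
    app: pod2                # assumed label on pod2
  ports:
  - port: 80
Then in pod1's spec:
spec:
  hostAliases:
  - ip: "10.96.100.100"
    hostnames:
    - "abc.com"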
There are a couple of things:
StatefulSets, where you will always know the pod name and can find it based on its ordinal index.
Using Pod hostname and subdomain spec field (Only works for standalone pods, afaik)
However, pod-to-pod resolution doesn't seem to be natively supported by Kubernetes in Deployments; my guess is that the rationale here is that pods can constantly change IP addresses and names. You could use the default Pod DNS entries, but again those DNS entries vary depending on the IP addresses assigned to the pods.
The other solution that I can think of for Deployments is to use something like Consul with stub domains; then on each pod you will have to add an initContainer or a Consul agent sidecar to register its IP with the Consul service, and every time a pod restarts it will need to overwrite the DNS registration in Consul.
If you don't want to use stub domains, there's also the option of using Pod DNS Configs.
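For the Pod DNS Config route, a minimal sketch (the nameserver IP and search domain are hypothetical placeholders for whatever stub resolver you run):
spec:
  dnsPolicy: ClusterFirst
  dnsConfig:
    nameservers:
    - 10.96.100.53   # hypothetical stub/Consul DNS service IP
    searches:
    - consul         # hypothetical search domain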
You can get the service IP and append it to /etc/hosts in pod1 before your application code runs:
echo "$(getent hosts pod2-service.namespace.svc.cluster.local | awk '{ print $1 }') abc.com" >> /etc/hosts
Notice: this is pretty hacky because you have to guarantee that the service IP of pod2 is fixed. If the service IP changes, pod1 will fail to resolve the host.
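One way to wire this in is to prepend it to the container's command; the image name and entrypoint below are hypothetical:
spec:
  containers:
  - name: app
    image: example/app:latest   # hypothetical image
    command: ["/bin/sh", "-c"]
    args:
    - |
      echo "$(getent hosts pod2-service.namespace.svc.cluster.local | awk '{ print $1 }') abc.com" >> /etc/hosts
      exec /app/start           # hypothetical application entrypoint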