K8s: how to change the nameserver for pods when resolving domains outside the cluster

For some reason, nameservers such as 4.2.2.1 and 208.67.220.220 are not the best option in China. When a pod tries to resolve a domain outside the cluster, the nodelocaldns DaemonSet complains about an i/o timeout while resolving the domain name:
[ERROR] plugin/errors: 2 checkpoint-api.hashicorp.com. A: read udp 192.168.1.15:35630->4.2.2.1:53: i/o timeout
[ERROR] plugin/errors: 2 checkpoint-api.hashicorp.com. AAAA: read udp 192.168.1.15:37137->4.2.2.2:53: i/o timeout
I modified the Corefile of CoreDNS in the ConfigMap to use another nameserver, 114.114.114.114, but to no effect.
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: coredns
  namespace: kube-system
  selfLink: "/api/v1/namespaces/kube-system/configmaps/coredns"
  uid: 844355d4-7dd3-11e9-ab0b-0800274131a7
  resourceVersion: '919'
  creationTimestamp: '2019-05-24T03:25:02Z'
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"Corefile":".:53 {\n errors\n health\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n upstream /etc/resolv.conf\n fallthrough in-addr.arpa ip6.arpa\n }\n prometheus :9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}\n"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"coredns","namespace":"kube-system"}}
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream 114.114.114.114
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . 114.114.114.114
        cache 30
        loop
        reload
        loadbalance
    }
    consul:53 {
        errors
        cache 30
        forward . 10.233.5.74
    }
So which configuration have I missed?

You can find the information here. More precisely:
To explicitly force all non-cluster DNS lookups to go through a specific nameserver at 172.16.0.1, point the proxy and upstream to the nameserver:
proxy . 172.16.0.1
upstream 172.16.0.1
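Note that the proxy plugin was removed in CoreDNS 1.5 and the kubernetes plugin's upstream option has since been deprecated, so on a current CoreDNS the same effect comes from forward alone. A minimal sketch of the server block, reusing the 172.16.0.1 nameserver from the quote above:

.:53 {
    errors
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    # all non-cluster lookups are sent to 172.16.0.1 (replaces proxy/upstream)
    forward . 172.16.0.1
    cache 30
}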

Related

how to manipulate coredns corefile in rke2 k8s cluster?

I'm using an rke2 cluster, i.e. a k8s distribution.
I want to add a nameserver for '*.example.org' to the cluster DNS system, for which I would change the Corefile of CoreDNS like below.
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . 172.16.0.1
    cache 30
    loop
    reload
    loadbalance
}
example.org:53 {   # add a block
    errors
    cache 30
    forward . 10.150.0.1
}
However, rke2 installs CoreDNS through its Helm system, so I need to change the Helm values to add something to the Corefile.
How can I achieve this? Thanks a lot.
You can edit the ConfigMap directly, and map the domain to the service name using the rewrite plugin:
rewrite name example.io service.default.svc.cluster.local
YAML for ref:
apiVersion: v1
data:
  Corefile: |-
    .:5353 {
        bind {$POD_IP}
        cache 30
        errors
        health {$POD_IP}:8080
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        rewrite name example.io service.default.svc.cluster.local
        loadbalance round_robin
        prometheus {$POD_IP}:9153
        forward . /etc/resolv.conf
        reload
    }
kind: ConfigMap
metadata:
  labels:
    app: coredns
    k8s-app: coredns
  name: coredns
  namespace: kube-system
Other answers for ref:
https://stackoverflow.com/a/73078010/5525824
https://stackoverflow.com/a/70672297/5525824
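For the rke2-specific part of the question, the packaged CoreDNS chart can be tuned with a HelmChartConfig manifest placed in /var/lib/rancher/rke2/server/manifests/. Below is a sketch assuming the bundled chart is named rke2-coredns and follows the upstream coredns chart's servers value schema (verify both against your rke2 version); note that overriding servers replaces the default list, so the stock .:53 block must be restated:

# /var/lib/rancher/rke2/server/manifests/rke2-coredns-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-coredns
  namespace: kube-system
spec:
  valuesContent: |-
    servers:
    # restated default cluster server block
    - zones:
      - zone: .
      port: 53
      plugins:
      - name: errors
      - name: health
        configBlock: |-
          lameduck 5s
      - name: ready
      - name: kubernetes
        parameters: cluster.local in-addr.arpa ip6.arpa
        configBlock: |-
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
      - name: prometheus
        parameters: 0.0.0.0:9153
      - name: forward
        parameters: . /etc/resolv.conf
      - name: cache
        parameters: 30
      - name: loop
      - name: reload
      - name: loadbalance
    # the extra stub zone from the question
    - zones:
      - zone: example.org.
      port: 53
      plugins:
      - name: errors
      - name: cache
        parameters: 30
      - name: forward
        parameters: . 10.150.0.1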

From a Kubernetes cluster, how to get access to an external service defined in a hosts file?

We need to connect to nginx-test.dup.com on port 80 from our kubernetes cluster.
We are using ExternalName but nginx-test.dup.com is only defined in /etc/hosts.
Is there a way to make that service available from within the Kubernetes cluster? We also tried adding hostNetwork: true
as described in How do I access external hosts from within my cluster?
and we got the following error:
error validating data: ValidationError(Service.spec): unknown field "hostNetwork" in io.k8s.api.core.v1.ServiceSpec
kind: Service
apiVersion: v1
metadata:
  name: nginx-test
spec:
  ports:
  - name: test-https
    port: 8080
    targetPort: 80
  type: ExternalName
  externalName: nginx-test.dup.com
CoreDNS doesn't take the node's /etc/hosts into account. You can add a hosts section to the CoreDNS ConfigMap manually.
# kubectl edit cm coredns -n kube-system
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        # Add the hosts section from here
        hosts {
            xxx.xxx.xxx.xxx nginx-test.dup.com
            fallthrough
        }
        # to here
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        ...
    }
Please note that it will take some time for the new setting to be used.
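If you don't want to wait for the change to be picked up, restarting the CoreDNS pods applies it immediately; a sketch, assuming the standard deployment name coredns:

kubectl -n kube-system rollout restart deployment coredns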

Is there a reversal of `ExternalName` service in Kubernetes?

I'm trying to move my development environment to Kubernetes to be more in line with existing deployment stages. In that context I need to call a service by its Ingress DNS name internally, while this DNS name resolves to an IP unreachable from the cluster itself. I would like to create a DNS alias inside the cluster which would point to the service, basically a reversal of a ExternalName service.
Example:
The external DNS name is my-service.my-domain.local, resolving to 127.0.0.1
Internal service is my-service.my-namespace.svc.cluster.local
A process running in a pod can't reach my-service.my-domain.local because of the resolved IP, but could reach my-service.my-namespace.svc.cluster.local; however, it needs to access the former by name
I would like to have a cluster-internal DNS name my-service.my-domain.local, resolving to the service my-service.my-namespace.svc.cluster.local (ExternalName service would do the exact opposite).
Is there a way to implement this in Kubernetes?
You can use CoreDNS and add the entry there using its ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    eks.amazonaws.com/component: coredns
    k8s-app: kube-dns
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    domain-name:port {
        errors
        cache 30
        forward . <IP or custom DNS>
        reload
    }
To test, you can start a busybox pod:
kubectl run busybox --restart=Never --image=busybox:1.28 -- sleep 3600
and hit the domain name from inside of it:
kubectl exec busybox -- nslookup domain-name
Official doc ref: https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/
Nice article for ref: https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/
Alternatively, you can map the domain to the service name using the Rewrite plugin of CoreDNS, which resolves a specified domain name to the domain name of a Service:
rewrite name example.io service.default.svc.cluster.local
apiVersion: v1
data:
  Corefile: |-
    .:5353 {
        bind {$POD_IP}
        cache 30
        errors
        health {$POD_IP}:8080
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        rewrite name example.io service.default.svc.cluster.local
        loadbalance round_robin
        prometheus {$POD_IP}:9153
        forward . /etc/resolv.conf
        reload
    }
kind: ConfigMap
metadata:
  labels:
    app: coredns
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: CoreDNS
    release: cceaddon-coredns
  name: coredns
  namespace: kube-system
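Tailored to the names in the question, the relevant rewrite line would look like the following sketch (both names are the question's hypothetical examples):

rewrite name my-service.my-domain.local my-service.my-namespace.svc.cluster.local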

How do I change the external DNS, not the Kubernetes one (CoreDNS)?

My pods can't resolve the URL https://nfe.sefaz.go.gov.br/nfe/services/NFeAutorizacao4.
I did a test and added the nameservers 8.8.8.8 and 8.8.4.4 to the /etc/resolv.conf file of one of the pods, and the URL is found.
The file /etc/resolv.conf looks like this:
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.245.0.10
nameserver 8.8.8.8
nameserver 8.8.4.4
options ndots:5
My question is:
Is there a proper way to fix the cluster DNS and keep it that way in an automated fashion?
We use CoreDNS; the Corefile:
.:53 {
    errors
    health
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
    import custom/*.override
}
import custom/*.server
I solved it by creating the ConfigMap 'coredns-custom', which the default Corefile already picks up via its import custom/*.server line.
It looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  custom.server: |
    specific-domain:53 {
        log
        forward . 8.8.8.8 8.8.4.4
    }
Replace 'specific-domain' with the domain you need (or '*' to match everything).
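To verify the custom block is being used, query through the cluster DNS from a throwaway pod and watch the CoreDNS logs (the log plugin above prints matching queries); a sketch, with specific-domain standing in for your real domain and assuming the standard k8s-app=kube-dns label:

kubectl run dnstest --rm -it --restart=Never --image=busybox:1.28 -- nslookup specific-domain
kubectl logs -n kube-system -l k8s-app=kube-dns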

Can we increase the cache TTL only for local traffic in the ConfigMap of the Kubernetes kube-dns service?

I want to increase the cache TTL for traffic coming from within the cluster. For example, if from inside the nginx pod I dig nginx-service, the TTL should be different than when I dig google.com. Is there any possible way I can achieve this?
Thanks in advance.
In the kubernetes plugin section of the CoreDNS Corefile you can set ttl to serve a custom TTL for cluster responses. The default is 5 seconds; the minimum allowed is 0 seconds, and the maximum is capped at 3600 seconds. Setting ttl to 0 will prevent records from being cached.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30   # set this to anything between 0 and 3600
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
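To see the two TTLs side by side, compare a cluster name with an external one from inside a pod; a sketch, assuming a pod with dig available (for example the jessie-dnsutils image used in the Kubernetes DNS debugging docs) and the nginx-service from the question:

kubectl run dnsutils --restart=Never --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 -- sleep 3600
# cluster record: TTL follows the kubernetes plugin's ttl (30 here)
kubectl exec dnsutils -- dig +noall +answer nginx-service.default.svc.cluster.local
# external record: TTL comes from the upstream, capped by cache 30
kubectl exec dnsutils -- dig +noall +answer google.com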