How do I change the external DNS, not the Kubernetes one (CoreDNS)?

My pods can't resolve the URL https://nfe.sefaz.go.gov.br/nfe/services/NFeAutorizacao4.
As a test I added the nameservers 8.8.8.8 and 8.8.4.4 to the /etc/resolv.conf file of one of the pods, and then the URL is found.
The file /etc/resolv.conf now looks like this:
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.245.0.10
nameserver 8.8.8.8
nameserver 8.8.4.4
options ndots:5
My question is:
Is there a proper way to fix the cluster DNS so that this works automatically, instead of patching each pod's resolv.conf?
We use CoreDNS. The Corefile:
.:53 {
    errors
    health
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
    import custom/*.override
}
import custom/*.server

I solved this by creating a ConfigMap named 'coredns-custom', which the default Corefile above already imports through its import custom/*.server directive.
It looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  custom.server: |
    specific-domain:53 {
        log
        forward . 8.8.8.8 8.8.4.4
    }
Replace 'specific-domain' with the domain you need forwarded, or use the root zone '.' to match every name.
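For completeness, a rough sketch of applying and verifying the change; the file name, the coredns deployment name and the test pod name below are assumptions (coredns is the usual deployment name, and any running pod works for the lookup):

kubectl apply -f coredns-custom.yaml
# the reload plugin should pick the change up on its own; a restart forces it immediately
kubectl -n kube-system rollout restart deployment coredns
# then resolve the name from any pod
kubectl exec -ti <some-pod> -- nslookup nfe.sefaz.go.gov.br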

Related

how to manipulate coredns corefile in rke2 k8s cluster?

I'm using an rke2 cluster, i.e. a k8s distribution.
I want to add a nameserver for '*.example.org' to the cluster DNS system, for which I should change the Corefile of CoreDNS like below.
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . 172.16.0.1
    cache 30
    loop
    reload
    loadbalance
}
example.org:53 {    # add an extra block
    errors
    cache 30
    forward . 10.150.0.1
}
However, rke2 installs CoreDNS through its Helm system, so I should change the Helm values to add something to the Corefile.
How should I achieve this? Thank you a lot.
You can edit the coredns ConfigMap and map the domain to the service name using the rewrite plugin:
rewrite name example.io service.default.svc.cluster.local
YAML for ref
apiVersion: v1
data:
  Corefile: |-
    .:5353 {
        bind {$POD_IP}
        cache 30
        errors
        health {$POD_IP}:8080
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        rewrite name example.io service.default.svc.cluster.local
        loadbalance round_robin
        prometheus {$POD_IP}:9153
        forward . /etc/resolv.conf
        reload
    }
kind: ConfigMap
metadata:
  labels:
    app: coredns
    k8s-app: coredns
  name: coredns
  namespace: kube-system
Other answers for reference:
https://stackoverflow.com/a/73078010/5525824
https://stackoverflow.com/a/70672297/5525824
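A quick way to check the rewrite from inside the cluster; this is a sketch using a throwaway busybox 1.28 pod, with the domain and service names taken from the example above:

kubectl run -ti --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup example.io
# should answer with the ClusterIP of service.default.svc.cluster.local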

From kubernetes cluster how to have access to external service with host file?

We need to connect to nginx-test.dup.com on port 80 from our Kubernetes cluster.
We are using an ExternalName service, but nginx-test.dup.com is only defined in /etc/hosts.
Is there a way to make that service available from within the Kubernetes cluster? We also tried adding hostNetwork: true
as described in How do I access external hosts from within my cluster?
and we got the following error:
error validating data: ValidationError(Service.spec): unknown field "hostNetwork" in io.k8s.api.core.v1.ServiceSpec
kind: Service
apiVersion: v1
metadata:
  name: nginx-test
spec:
  ports:
  - name: test-https
    port: 8080
    targetPort: 80
  type: ExternalName
  externalName: nginx-test.dup.com
CoreDNS doesn't take /etc/hosts into account. You can add a hosts section to the CoreDNS ConfigMap manually:
# kubectl edit cm coredns -n kube-system
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        # Add the hosts section from here
        hosts {
            xxx.xxx.xxx.xxx nginx-test.dup.com
            fallthrough
        }
        # to here
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        ...
    }
Please note that it will take some time for the new setting to be used.
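If you don't want to wait for the reload plugin, here is a hedged sketch of forcing the change and checking the result (assuming the deployment is named coredns, the usual default, and using a throwaway busybox 1.28 pod):

kubectl -n kube-system rollout restart deployment coredns
kubectl run -ti --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup nginx-test.dup.com
# should now return the xxx.xxx.xxx.xxx address from the hosts block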

Can we increase the cache only for local traffic inside the configmap of kubernetes kubedns service?

I want to increase the caching limit for traffic coming from within the cluster. For example, if I dig nginx-service from inside the nginx pod the TTL should be one value, and if I dig google.com it should be another. Is there any possible way I can achieve this?
Thanks in advance.
In the kubernetes plugin section of the CoreDNS Corefile you can use ttl to set a custom TTL for responses. The default is 5 seconds. The minimum TTL allowed is 0 seconds, and the maximum is capped at 3600 seconds. Setting ttl to 0 will prevent records from being cached.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30   # set this to anything between 0 and 3600
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
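To confirm the TTL that CoreDNS actually serves for an in-cluster name, one hedged option is a throwaway pod built from an image that ships dig (tutum/dnsutils is just one example of such an image, and nginx-service is the service name from the question):

kubectl run -ti --rm dns-debug --image=tutum/dnsutils --restart=Never -- dig +noall +answer nginx-service.default.svc.cluster.local
# the number before "IN A" in the answer line is the TTL returned by CoreDNS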

Configure custom DNS in kubernetes

I would like to configure custom DNS in CoreDNS (to work around a NAT loopback issue, meaning that within the network IPs are not resolved the same way as outside the network).
I tried to modify the ConfigMap for CoreDNS with a 'fake' domain just to test, but it does not work.
I am using minik8s.
Here is the config of the coredns ConfigMap:
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . 8.8.8.8 8.8.4.4
        cache 30
        loop
        reload
        loadbalance
    }
    consul.local:53 {
        errors
        cache 30
        forward . 10.150.0.1
    }
kind: ConfigMap
Then I try to resolve this address using busybox, but it does not work.
$kubectl exec -ti busybox -- nslookup test.consul.local
> nslookup: can't resolve 'test.consul.local'
command terminated with exit code 1
Even kubernetes DNS is failing
$ kubectl exec -ti busybox -- nslookup kubernetes.default
nslookup: can't resolve 'kubernetes.default'
command terminated with exit code 1
I've reproduced your scenario and it works as intended.
Here I'll describe two different ways to use custom DNS on Kubernetes. The first is at the Pod level: you can customize which DNS server your pod will use. This is useful in specific cases where you don't want to change this configuration for all pods.
To achieve this, you need to add some optional fields to the pod spec. To know more about it, please read this.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: busybox-custom
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
    - 8.8.8.8
    searches:
    - ns1.svc.cluster-domain.example
    - my.dns.search.suffix
    options:
    - name: ndots
      value: "2"
    - name: edns0
  restartPolicy: Always
$ kubectl exec -ti busybox-custom -- nslookup cnn.com
Server: 8.8.8.8
Address 1: 8.8.8.8 dns.google
Name: cnn.com
Address 1: 2a04:4e42::323
Address 2: 2a04:4e42:400::323
Address 3: 2a04:4e42:200::323
Address 4: 2a04:4e42:600::323
Address 5: 151.101.65.67
Address 6: 151.101.129.67
Address 7: 151.101.193.67
Address 8: 151.101.1.67
$ kubectl exec -ti busybox-custom -- nslookup kubernetes.default
Server: 8.8.8.8
Address 1: 8.8.8.8 dns.google
nslookup: can't resolve 'kubernetes.default'
command terminated with exit code 1
As you can see, this method creates problems when resolving internal DNS names.
The second way to achieve that is to change the DNS at the cluster level. This is the approach you chose, and as you can see:
$ kubectl get cm coredns -n kube-system -o yaml
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . 8.8.8.8 8.8.4.4
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
As you can see, I don't have a consul.local:53 entry.
Consul is a service networking solution to connect and secure services across any runtime platform and public or private cloud.
This kind of setup is not common and I don't think you need to include this entry in your setup. It is likely your issue: when I add this entry, I face the same problems you reported, and without it resolution works as expected:
$ kubectl exec -ti busybox -- nslookup cnn.com
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: cnn.com
Address 1: 2a04:4e42:200::323
Address 2: 2a04:4e42:400::323
Address 3: 2a04:4e42::323
Address 4: 2a04:4e42:600::323
Address 5: 151.101.65.67
Address 6: 151.101.193.67
Address 7: 151.101.1.67
Address 8: 151.101.129.67
$ kubectl exec -ti busybox -- nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
Another main problem is that you are debugging DNS using the latest busybox image. I highly recommend avoiding any version newer than 1.28, as newer versions have known problems with name resolution.
The best busybox image you can use to troubleshoot DNS is 1.28, as Oleg Butuzov recommended in the comments.
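For example, a throwaway pod on that image (a sketch; the pod name is arbitrary):

kubectl run -ti --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default
kubectl run -ti --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup cnn.com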

K8s: how to change the nameserver for pod when trying to visit some domain outside the cluster

For some reason, nameservers such as 4.2.2.1 and 208.67.220.220 are not the best option in China. When a pod tries to resolve a domain outside the cluster, the nodelocaldns daemon set complains about an i/o timeout while resolving the domain name:
[ERROR] plugin/errors: 2 checkpoint-api.hashicorp.com. A: read udp 192.168.1.15:35630->4.2.2.1:53: i/o timeout
[ERROR] plugin/errors: 2 checkpoint-api.hashicorp.com. AAAA: read udp 192.168.1.15:37137->4.2.2.2:53: i/o timeout
I modified the Corefile of CoreDNS in the ConfigMap to use another nameserver, 114.114.114.114, but without effect.
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: coredns
  namespace: kube-system
  selfLink: "/api/v1/namespaces/kube-system/configmaps/coredns"
  uid: 844355d4-7dd3-11e9-ab0b-0800274131a7
  resourceVersion: '919'
  creationTimestamp: '2019-05-24T03:25:02Z'
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream 114.114.114.114
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . 114.114.114.114
        cache 30
        loop
        reload
        loadbalance
    }
    consul:53 {
        errors
        cache 30
        forward . 10.233.5.74
    }
So which configuration have I missed?
You can find the information here. More precisely:
To explicitly force all non-cluster DNS lookups to go through a specific nameserver at 172.16.0.1, point the proxy and upstream to the nameserver:
proxy . 172.16.0.1
upstream 172.16.0.1
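A hedged side note: in newer CoreDNS releases the proxy plugin has been removed and the kubernetes plugin's upstream option is ignored, so on those versions the same effect comes from forward alone. A minimal sketch reusing the nameserver from the question:

.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    # send all non-cluster lookups to the chosen resolver
    forward . 114.114.114.114
    cache 30
    loop
    reload
    loadbalance
}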