Can we increase the cache TTL only for local traffic inside the ConfigMap of the Kubernetes kube-dns service?

I want to increase the caching limit for traffic coming from within the cluster. For example, from inside an nginx pod, if I dig nginx-service the TTL should be one value, and if I dig google.com it should be another. Is there any possible way I can achieve this?
Thanks in advance.

In the kubernetes plugin section of the CoreDNS Corefile you can use the ttl option to set a custom TTL for responses. The default is 5 seconds. The minimum TTL allowed is 0 seconds, and the maximum is capped at 3600 seconds. Setting the TTL to 0 will prevent records from being cached.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30   # set this to anything between 0 and 3600
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
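To get the split the question asks for (one TTL for nginx-service, another for google.com), combine this with the cache plugin: the kubernetes plugin's ttl controls the TTL of in-cluster records, while the cache plugin's argument caps how long forwarded (external) answers are kept. A minimal sketch with illustrative values; note that cache only caps TTLs, it never raises them, so the 5-second internal records are unaffected by the 300-second cache ceiling:

.:53 {
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
        ttl 5        # in-cluster names (e.g. nginx-service) answered with TTL 5
    }
    forward . /etc/resolv.conf
    cache 300        # external answers (e.g. google.com) cached for up to 300s
}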

Related

How to manipulate the CoreDNS Corefile in an rke2 k8s cluster?

I'm using an rke2 cluster, i.e. a k8s distribution.
I want to add a nameserver for '*.example.org' to the cluster DNS system, for which I need to change the Corefile of CoreDNS like below.
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . 172.16.0.1
    cache 30
    loop
    reload
    loadbalance
}
example.org:53 {   # add an extra server block
    errors
    cache 30
    forward . 10.150.0.1
}
However, rke2 installs CoreDNS through its Helm system, so I should change the Helm values to add things to the Corefile.
How should I achieve this? Thank you a lot.
You can edit the ConfigMap and map the domain to the service name using the rewrite plugin:

rewrite name example.io service.default.svc.cluster.local
loadbalance round_robin
prometheus {$POD_IP}:9153
forward . /etc/resolv.conf
reload

Full YAML for reference:
apiVersion: v1
data:
  Corefile: |-
    .:5353 {
        bind {$POD_IP}
        cache 30
        errors
        health {$POD_IP}:8080
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        rewrite name example.io service.default.svc.cluster.local
        loadbalance round_robin
        prometheus {$POD_IP}:9153
        forward . /etc/resolv.conf
        reload
    }
kind: ConfigMap
metadata:
  labels:
    app: coredns
    k8s-app: coredns
  name: coredns
  namespace: kube-system
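Since the question is specifically about rke2's Helm-managed install (where direct ConfigMap edits may be reverted when the chart reconciles), you can instead override the packaged chart's values with a HelmChartConfig. A sketch, assuming rke2's rke2-coredns chart follows the upstream CoreDNS Helm chart's servers/plugins values layout:

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-coredns
  namespace: kube-system
spec:
  valuesContent: |-
    servers:
    - zones:
      - zone: .
      port: 53
      plugins:
      - name: errors
      - name: health
      - name: ready
      - name: kubernetes
        parameters: cluster.local in-addr.arpa ip6.arpa
        configBlock: |-
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
      - name: forward
        parameters: . 172.16.0.1
      - name: cache
        parameters: 30
      - name: loop
      - name: reload
      - name: loadbalance
    # the extra server block for example.org from the question
    - zones:
      - zone: example.org
      port: 53
      plugins:
      - name: errors
      - name: cache
        parameters: 30
      - name: forward
        parameters: . 10.150.0.1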
Other answers for reference:
https://stackoverflow.com/a/73078010/5525824
https://stackoverflow.com/a/70672297/5525824

How to access an external service from a kubernetes cluster with a hosts file?

We need to connect to nginx-test.dup.com on port 80 from our kubernetes cluster.
We are using ExternalName but nginx-test.dup.com is only defined in /etc/hosts.
Is there a way to make that service available from within the kubernetes cluster? We also tried adding hostNetwork: true
as described in How do I access external hosts from within my cluster?
and we got the following error:
error validating data: ValidationError(Service.spec): unknown field "hostNetwork" in io.k8s.api.core.v1.ServiceSpec
kind: Service
apiVersion: v1
metadata:
  name: nginx-test
spec:
  ports:
  - name: test-https
    port: 8080
    targetPort: 80
  type: ExternalName
  externalName: nginx-test.dup.com
CoreDNS doesn't take the node's /etc/hosts into account (and hostNetwork is a Pod-level field, not a Service field, which is why the validation failed). You can add a hosts section to the CoreDNS ConfigMap manually:
# kubectl edit cm coredns -n kube-system
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        # Add the hosts section from here
        hosts {
            xxx.xxx.xxx.xxx nginx-test.dup.com
            fallthrough
        }
        # to here
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        ...
    }
Please note that it will take some time for the new setting to be picked up (the reload plugin only checks the Corefile for changes periodically).

How do I change the external DNS, not kubernetes (CoreDNS)?

My pods can't find the URL https://nfe.sefaz.go.gov.br/nfe/services/NFeAutorizacao4.
I did a test and added the DNS servers 8.8.8.8 and 8.8.4.4 to the /etc/resolv.conf file of one of the pods, and the URL is found.
The /etc/resolv.conf file looks like this:
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.245.0.10
nameserver 8.8.8.8
nameserver 8.8.4.4
options ndots:5
My question is:
Is there a correct way to fix the cluster DNS and keep it set up in an automated way?
We use CoreDNS; the Corefile:
.:53 {
    errors
    health
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
    import custom/*.override
}
import custom/*.server
I solved that by creating the ConfigMap coredns-custom, which the default Corefile already imports (via the import custom/*.server line above).
It looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  custom.server: |
    specific-domain:53 {
        log
        forward . 8.8.8.8 8.8.4.4
    }
Replace 'specific-domain' with some specific domain or '*'.
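The custom/*.override import inside the main server block can be used the same way to splice extra plugins into the default zone, for example to enable query logging while debugging. A sketch, assuming the same coredns-custom ConfigMap (the data key just has to match the *.override glob):

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  log.override: |
    log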

Add a simple A record to the CoreDNS service on Kubernetes

Here is the issue:
We have several microk8s clusters running on different networks, yet each has access to our storage network where our NAS appliances live.
Within Kubernetes, we create disks with an nfs provisioner (nfs-externalsubdir). Some disks were created with the IP of the NAS server specified.
When we later had to change that IP, we discovered that the disk was bound to it, and changing the IP meant creating a new storage resource.
To avoid this, we would like to set a DNS record at the Kubernetes cluster level, so we could create storage resources with the nfs provisioner based on a name and not an IP, and alter the DNS record when needed (when we upgrade or migrate our external NAS appliances, for instance).
For instance, I'd like to tell every microk8s environment that:
192.168.1.4 my-nas.mydomain.local
... like I would within the /etc/hosts file.
Is there a proper way to achieve this?
I tried to follow the advice in this link: Is there a way to add arbitrary records to kube-dns?
(the answer upvoted 15 times, the cluster-wise section), restarted a deployment, but it didn't work.
I cannot use the hostAliases feature since it isn't provided by every chart we are using, which is why I'm looking for a more global solution.
Best Regards,
You can set your custom DNS in K8s using kube-dns (CoreDNS).
You have to inject/pass the configuration file as a ConfigMap to the CoreDNS volume.
The ConfigMap will look like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
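For the exact record in the question, the usual approach is the hosts plugin inside that server block, placed before the kubernetes plugin (the same technique as in the /etc/hosts answer above). A minimal sketch using the name and IP from the question; fallthrough keeps every other query flowing to the remaining plugins:

hosts {
    192.168.1.4 my-nas.mydomain.local
    fallthrough
}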
You can read more about it at:
https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/
https://platform9.com/kb/kubernetes/how-to-customize-coredns-configuration-for-adding-additional-ext
Alternatively, you can also use ExternalDNS with CoreDNS:
you annotate the Service (resource) and ExternalDNS will add the address to CoreDNS.
Read more about it at:
https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/coredns.md
https://docs.mirantis.com/mcp/q4-18/mcp-deployment-guide/deploy-mcp-cluster-using-drivetrain/deploy-k8s/external-dns/verify-external-dns/coredns-etxdns-verify.html
...we could create storage resources with the nfs provisioner based on a name and not an IP, and we could alter the DNS record when needed...
For this you can try a headless Service, without touching CoreDNS:
apiVersion: v1
kind: Service
metadata:
  name: my-nas
  namespace: kube-system # <-- you can place it somewhere else
  labels:
    app: my-nas
spec:
  clusterIP: None # headless: DNS resolves straight to the endpoint IP
  ports:
  - protocol: TCP
    port: <nas port>
---
apiVersion: v1
kind: Endpoints
metadata:
  name: my-nas
  namespace: kube-system # must match the Service's namespace
subsets:
- addresses:
  - ip: 192.168.1.4
  ports:
  - port: <nas port>
Use it as: my-nas.kube-system.svc.cluster.local

K8s: how to change the nameserver for a pod when trying to visit some domain outside the cluster

For some reason, nameservers such as 4.2.2.1 and 208.67.220.220 are not the best option in China. When a pod tries to resolve a domain outside the cluster, the nodelocaldns DaemonSet complains about i/o timeouts while resolving the domain name:
[ERROR] plugin/errors: 2 checkpoint-api.hashicorp.com. A: read udp 192.168.1.15:35630->4.2.2.1:53: i/o timeout
[ERROR] plugin/errors: 2 checkpoint-api.hashicorp.com. AAAA: read udp 192.168.1.15:37137->4.2.2.2:53: i/o timeout
I modified the Corefile of CoreDNS in the ConfigMap to use another nameserver, 114.114.114.114, but without effect.
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream 114.114.114.114
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . 114.114.114.114
        cache 30
        loop
        reload
        loadbalance
    }
    consul:53 {
        errors
        cache 30
        forward . 10.233.5.74
    }
So which configuration have I missed?
You can find the information here. More precisely:
To explicitly force all non-cluster DNS lookups to go through a specific nameserver at 172.16.0.1, point the proxy and upstream to the nameserver:

proxy . 172.16.0.1
upstream 172.16.0.1
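Note that the proxy plugin and the kubernetes plugin's upstream option were removed in newer CoreDNS releases; on current versions the equivalent is a single forward line in the server block:

forward . 172.16.0.1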