I'm trying to move my development environment to Kubernetes to be more in line with existing deployment stages. In that context I need to call a service by its Ingress DNS name internally, while this DNS name resolves to an IP unreachable from the cluster itself. I would like to create a DNS alias inside the cluster which would point to the service, basically a reversal of an ExternalName service.
Example:
The external DNS name is my-service.my-domain.local, resolving to 127.0.0.1
Internal service is my-service.my-namespace.svc.cluster.local
A process running in a pod can't reach my-service.my-domain.local because of the resolved IP, but could reach my-service.my-namespace.svc.cluster.local; however, it needs to access the former by name
I would like to have a cluster-internal DNS name my-service.my-domain.local, resolving to the service my-service.my-namespace.svc.cluster.local (ExternalName service would do the exact opposite).
Is there a way to implement this in Kubernetes?
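For reference, a plain ExternalName service maps a cluster-internal name to an external name, which is the opposite direction of what I need; a minimal sketch of that, using the names from the example above:
# This is the direction I do NOT want: cluster name -> external name
kind: Service
apiVersion: v1
metadata:
  name: my-service
  namespace: my-namespace
spec:
  type: ExternalName
  externalName: my-service.my-domain.local  # resolves to 127.0.0.1, unreachable from pods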
You can use CoreDNS and add the entry there using its ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  annotations: {}
  labels:
    eks.amazonaws.com/component: coredns
    k8s-app: kube-dns
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    domain-name:port {
        errors
        cache 30
        forward . <IP or custom DNS>
        reload
    }
To test, you can start a busybox pod:
kubectl run busybox --restart=Never --image=busybox:1.28 -- sleep 3600
Then hit the domain name from inside busybox:
kubectl exec busybox -- nslookup domain-name
Official doc ref: https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/
Nice article for ref: https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/
Or you can map the domain to the service name using the rewrite plugin (rewrite name example.io service.default.svc.cluster.local).
Use the rewrite plugin of CoreDNS to resolve a specified domain name to the domain name of a Service:
apiVersion: v1
data:
  Corefile: |-
    .:5353 {
        bind {$POD_IP}
        cache 30
        errors
        health {$POD_IP}:8080
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        rewrite name example.io service.default.svc.cluster.local
        loadbalance round_robin
        prometheus {$POD_IP}:9153
        forward . /etc/resolv.conf
        reload
    }
kind: ConfigMap
metadata:
  labels:
    app: coredns
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: CoreDNS
    release: cceaddon-coredns
  name: coredns
  namespace: kube-system
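Applied to the names from the question, the rewrite rule itself would be along these lines (a sketch; the namespace must match wherever the service actually lives):
# map the external name to the cluster-internal service name
rewrite name my-service.my-domain.local my-service.my-namespace.svc.cluster.local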
Related
We need to connect to nginx-test.dup.com on port 80 from our kubernetes cluster.
We are using ExternalName but nginx-test.dup.com is only defined in /etc/hosts.
Is there a way to make that service available from within the Kubernetes cluster? We also tried adding hostNetwork: true
as described in How do I access external hosts from within my cluster?
and we got the following error:
error validating data: ValidationError(Service.spec): unknown field "hostNetwork" in io.k8s.api.core.v1.ServiceSpec
kind: Service
apiVersion: v1
metadata:
  name: nginx-test
spec:
  ports:
  - name: test-https
    port: 8080
    targetPort: 80
  type: ExternalName
  externalName: nginx-test.dup.com
CoreDNS doesn't take /etc/hosts into account. You can add a hosts section to the CoreDNS ConfigMap manually.
# kubectl edit cm coredns -n kube-system
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        # Add the hosts section from here
        hosts {
            xxx.xxx.xxx.xxx nginx-test.dup.com
            fallthrough
        }
        # to here
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        ...
    }
Please note that it will take some time for the new setting to be used.
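To verify the new entry, you can reuse the busybox test pattern from earlier in this thread; the name should now resolve to the IP you put in the hosts block:
kubectl run busybox --restart=Never --image=busybox:1.28 -- sleep 3600
kubectl exec busybox -- nslookup nginx-test.dup.com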
Here is the issue:
We have several microk8s clusters running on different networks, yet each has access to our storage network where our NAS appliances are.
Within Kubernetes, we create disks with an NFS provisioner (nfs-externalsubdir). Some disks were created with the IP of the NAS server specified.
Once we had to change the IP, we discovered that the disk was bound to the IP, and changing the IP meant creating a new storage resource.
To avoid this, we would like to be able to set a DNS record at the Kubernetes cluster level so we could create storage resources with the NFS provisioner based on a name and not an IP, and we could alter the DNS record when needed (when we upgrade or migrate our external NAS appliances, for instance).
For instance, I'd like to tell every microk8s environment that:
192.168.1.4 my-nas.mydomain.local
... like I would within the /etc/hosts file.
Is there a proper way to achieve this?
I tried to follow the advice in this link: Is there a way to add arbitrary records to kube-dns?
(the answer upvoted 15 times, the cluster-wide section), restarted a deployment, but it didn't work.
I cannot use the hostAliases feature since it isn't provided by every chart we are using, which is why I'm looking for a more global solution.
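(For reference, hostAliases is set per pod spec, so it only helps pods whose charts expose it; a sketch of what it would look like on a single, hypothetical pod:)
apiVersion: v1
kind: Pod
metadata:
  name: nas-test            # hypothetical pod, for illustration only
spec:
  hostAliases:              # adds this entry to the pod's /etc/hosts
  - ip: "192.168.1.4"
    hostnames:
    - "my-nas.mydomain.local"
  containers:
  - name: shell
    image: busybox:1.28
    command: ["sleep", "3600"]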
Best Regards,
You can set your custom DNS in K8s using kube-dns (CoreDNS).
You have to inject/pass the configuration file as a ConfigMap to the CoreDNS volume.
The ConfigMap will look like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
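To cover the my-nas.mydomain.local record from the question, one option (a sketch, mirroring the hosts plugin example earlier in this thread) is to add a hosts block inside that .:53 server block:
    hosts {
        192.168.1.4 my-nas.mydomain.local
        fallthrough
    }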
You can read more about it at:
https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/
https://platform9.com/kb/kubernetes/how-to-customize-coredns-configuration-for-adding-additional-ext
Or else you can also use external-dns together with CoreDNS.
You can annotate the service (resource) and external-dns will add the record to CoreDNS.
Read more about it at:
https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/coredns.md
https://docs.mirantis.com/mcp/q4-18/mcp-deployment-guide/deploy-mcp-cluster-using-drivetrain/deploy-k8s/external-dns/verify-external-dns/coredns-etxdns-verify.html
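With that approach the desired hostname is requested through an annotation on a Service; a minimal sketch (the service name and port are hypothetical, the annotation key is the one used in the external-dns tutorials linked above):
apiVersion: v1
kind: Service
metadata:
  name: my-nas-record                 # hypothetical name, for illustration
  annotations:
    # external-dns picks this up and creates the DNS record
    external-dns.alpha.kubernetes.io/hostname: my-nas.mydomain.local
spec:
  ports:
  - protocol: TCP
    port: 2049                        # NFS port, assumed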
...we could create storage resources with the nfs provisioner based on a name and not an IP, and we could alter the DNS record when needed...
For this you can try a Service without a selector (backed by manually created Endpoints), without touching CoreDNS:
apiVersion: v1
kind: Service
metadata:
  name: my-nas
  namespace: kube-system # <-- you can place it somewhere else
  labels:
    app: my-nas
spec:
  ports:
  - protocol: TCP
    port: <nas port>
---
apiVersion: v1
kind: Endpoints
metadata:
  name: my-nas
  namespace: kube-system # must match the Service's namespace
subsets:
- addresses:
  - ip: 192.168.1.4
  ports:
  - port: <nas port>
Use it as: my-nas.kube-system.svc.cluster.local
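A quick check from inside the cluster (reusing the busybox test pattern from earlier in this thread) would be:
kubectl run busybox --restart=Never --image=busybox:1.28 -- sleep 3600
kubectl exec busybox -- nslookup my-nas.kube-system.svc.cluster.local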
I have seen that the standard way to access http services through the kubectl proxy is the following:
http://api.host/api/v1/namespaces/NAMESPACE/services/SERVICE_NAME:SERVICE_PORT/proxy/
Why is it that the kubernetes-dashboard uses https:kubernetes-dashboard: for SERVICE_NAME:SERVICE_PORT?
I would assume from the following that it would be kubernetes-dashboard:443.
kubectl -n kube-system get service kubernetes-dashboard -o wide
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE   SELECTOR
kubernetes-dashboard   ClusterIP   10.233.50.212   <none>        443:31663/TCP   15d   k8s-app=kubernetes-dashboard
Additionally, what is the meaning of the port shown as 443:31663 when all other services just have x/TCP (x being one number instead of x:y)?
Lastly, kubectl cluster-info will show
Kubernetes master is running at https://x.x.x.x:x
kubernetes-dashboard is running at https://x.x.x.x:x/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
I have created a simple service but it does not show here and I am confused how to determine what services show here or not.
Why is it that the kubernetes-dashboard uses
https:kubernetes-dashboard: for SERVICE_NAME:SERVICE_PORT?
Additionally, what is the meaning of the port shown as 443:31663 when all other services just have x/TCP (x being one number instead of x:y)?
As described in Manually constructing apiserver proxy URLs, the default way is
http://kubernetes_master_address/api/v1/namespaces/namespace_name/services/service_name[:port_name]/proxy
By default, the API server proxies to your service using http. To use
https, prefix the service name with https::
http://kubernetes_master_address/api/v1/namespaces/namespace_name/services/https:service_name:[port_name]/proxy
The supported formats for the name segment of the URL are:
<service_name> - proxies to the default or unnamed port using http
<service_name>:<port_name> - proxies to the specified port using http
https:<service_name>: - proxies to the default or unnamed port using https (note the trailing colon)
https:<service_name>:<port_name> - proxies to the specified port using https
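Plugging the dashboard service into the third format (default/unnamed port over https, hence the trailing colon) yields exactly the URL that kubectl cluster-info prints:
https://x.x.x.x:x/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy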
Next:
I have created a simple service but it does not show here and I am
confused how to determine what services show here or not.
Here is what I found and tested for you:
cluster-info API reference:
Display addresses of the master and services with label kubernetes.io/cluster-service=true To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
So, as soon as you add the kubernetes.io/cluster-service: "true" label, the service starts to be seen under kubectl cluster-info.
BUT!! There is an expected behavior where you will see your service disappear from the output after a couple of minutes. The explanation has been found here; I only copy-paste it here for future reference.
The other part is the addon manager. It uses this annotation to synchronize the cluster state with static manifest files. The behavior was something like this:
1) addon manager reads a yaml from disk -> deploys the contents
2) addon manager reads all deployments from api server with annotation cluster-service:true -> deletes all that do not exist as files
As a result, if you add this annotation, addon manager will remove dashboard after a minute or so.
So,
dashboard is deployed after cluster creation -> annotation should not be set:
https://github.com/kubernetes/dashboard/blob/b98d167dadaafb665a28091d1e975cf74eb31c94/src/deploy/kubernetes-dashboard.yaml
dashboard is deployed as part of cluster creation -> annotation should be set:
https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dashboard/dashboard-controller.yaml
At least this was the behavior some time ago. I think kubeadm does not use the addon-manager. But it is still part of the kube-up script.
A solution for this behavior also exists: add the additional label addonmanager.kubernetes.io/mode: EnsureExists
The explanation is here
Your final service should look like:
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
  - port: 443
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
kubectl get svc kubernetes-dashboard -n kube-system -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists","k8s-app":"kubernetes-dashboard","kubernetes.io/cluster-service":"true"},"name":"kubernetes-dashboard","namespace":"kube-system"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
kubectl cluster-info
Kubernetes master is running at https://*.*.*.*
...
kubernetes-dashboard is running at https://*.*.*.*/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
...
I tried whitelisting IP addresses for my kubernetes cluster's incoming traffic using this example:
Although this works as expected, I wanted to go a step further and see if I can use Istio gateways or a virtual service when I set up the Istio rule, instead of the LoadBalancer (ingressgateway).
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: checkip
  namespace: my-namespace
spec:
  match: source.labels["app"] == "my-app"
  actions:
  - handler: whitelistip.listchecker
    instances:
    - sourceip.listentry
---
Here my-app is of kind: Gateway with a certain host and port, and is labelled app=my-app.
I am using Istio version 1.1.1.
Also my cluster has all of istio-system running, with Envoy sidecars on almost all service pods.
You are confusing one thing: in the above rule, match: source.labels["app"] == "my-app" does not refer to any resource's label, but to the pod's labels.
From OutputTemplate Documentation:
sourceLabels | Refers to source pod labels.
attributebindings can refer to this field using $out.sourcelabels
And you can verify by looking for resources with "app=istio-ingressgateway" label via:
kubectl get pods,svc -n istio-system -l "app=istio-ingressgateway" --show-labels
You can check this blog from Istio about the Mixer Adapter Model to understand the complete mixer model, its handlers, instances, and rules.
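So, to match traffic that enters through the ingress gateway, the rule from the question would reference the gateway pod's label instead; a sketch (same handler and instance names as in the question, which are assumed to exist):
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: checkip
  namespace: my-namespace
spec:
  # match on the ingress gateway pod's label, not the Gateway resource's label
  match: source.labels["app"] == "istio-ingressgateway"
  actions:
  - handler: whitelistip.listchecker
    instances:
    - sourceip.listentry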
Hope it helps!
I have a service and an ingress set up on my minikube kubernetes cluster which exposes the domain name hello.life.com
Now I need to access this domain from inside another pod as
curl http://hello.life.com
and this should return proper html
My service is as follows:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: bulging-zorse-key
    chart: key-0.1.0
    heritage: Tiller
    release: bulging-zorse
  name: bulging-zorse-key-svc
  namespace: abc
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    name: bulging-zorse-key
  type: ClusterIP
status:
  loadBalancer: {}
My ingress is as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  labels:
    app: bulging-zorse-key
    chart: key-0.1.0
    heritage: Tiller
    release: bulging-zorse
  name: bulging-zorse-key-ingress
  namespace: dev
spec:
  rules:
  - host: hello.life.com
    http:
      paths:
      - backend:
          serviceName: bulging-zorse-key-svc
          servicePort: 80
        path: /
status:
  loadBalancer:
    ingress:
    - {}
Can someone please help me out as to what changes I need to make to get it working?
Thanks in advance!!!
I found a good explanation of your problem and the solution in the Custom DNS Entries For Kubernetes article:
Suppose we have a service, foo.default.svc.cluster.local that is available to outside clients as foo.example.com. That is, when looked up outside the cluster, foo.example.com will resolve to the load balancer VIP - the external IP address for the service. Inside the cluster, it will resolve to the same thing, and so using this name internally will cause traffic to hairpin - travel out of the cluster and then back in via the external IP.
The solution is:
Instead, we want foo.example.com to resolve to the internal ClusterIP, avoiding the hairpin.
To do this in CoreDNS, we make use of the rewrite plugin. This plugin can modify a query before it is sent down the chain to whatever backend is going to answer it.
To get the behavior we want, we just need to add a rewrite rule mapping foo.example.com to foo.default.svc.cluster.local:
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        rewrite name foo.example.com foo.default.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2019-01-09T15:02:52Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "8309112"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: a2ef5ff1-141f-11e9-9043-42010a9c0003
Note: In your case, you have to put the ingress service name as the destination for the alias.
(E.g. rewrite name hello.life.com ingress-service-name.ingress-namespace.svc.cluster.local) Make sure you're using the correct service name and namespace name.
Once we add that to the ConfigMap via kubectl edit configmap coredns -n kube-system or kubectl apply -f patched-coredns-deployment.yaml -n kube-system, we have to wait 10-15 minutes. Recent CoreDNS versions include the reload plugin.
reload
Name
reload - allows automatic reload of a changed Corefile.
Description
This plugin periodically checks if the Corefile has changed by reading it and calculating its MD5 checksum. If the file has changed, it reloads CoreDNS with the new Corefile. This eliminates the need to send a SIGHUP or SIGUSR1 after changing the Corefile.
The reloads are graceful - you should not see any loss of service when the reload happens. Even if the new Corefile has an error, CoreDNS will continue to run the old config and an error message will be printed to the log. But see the Bugs section for failure modes.
In some environments (for example, Kubernetes), there may be many CoreDNS instances that started very near the same time and all share a common Corefile. To prevent these all from reloading at the same time, some jitter is added to the reload check interval. This is jitter from the perspective of multiple CoreDNS instances; each instance still checks on a regular interval, but all of these instances will have their reloads spread out across the jitter duration. This isn't strictly necessary given that the reloads are graceful, and can be disabled by setting the jitter to 0s.
Jitter is re-calculated whenever the Corefile is reloaded.
Running our test pod, we can see this works:
$ kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools
If you don't see a command prompt, try pressing enter.
/ # host foo
foo.default.svc.cluster.local has address 10.0.0.72
/ # host foo.example.com
foo.example.com has address 10.0.0.72
/ # host bar.example.com
Host bar.example.com not found: 3(NXDOMAIN)
/ #
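In your case the same kind of check applies, e.g. from a test pod (same image as above; hello.life.com should resolve to the ClusterIP of the service the rewrite points at):
kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools
/ # host hello.life.com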