Forward internal requests to internal DNS with CoreDNS in Kubernetes

I'd like to connect to my Nextcloud instance using the internal DNS server (CoreDNS) provided by Kubernetes. I am remotely connected to the cluster through a WireGuard VPN deployed in Kubernetes:
It clearly states that I am using the CoreDNS server 10.43.0.10 used by all the other services:
My nextcloud instance is using the traefik ingress controller described in this file:
Putting
10.43.223.221 nextcloud.local
in my /etc/hosts allows me to access the instance, but if I add a line to my Corefile (as seen in the photo below) to route nextcloud.local to 10.43.223.221, nothing happens.
What should I do to make it work? I want every peer connected to that WireGuard instance to be able to resolve those DNS names.
Thanks!

I managed to solve my problem by following the solution described in CoreDNS do not respect local DNS. I just added this to my Corefile:
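The original snippet isn't reproduced here. Purely as an illustration of one way to make CoreDNS answer for nextcloud.local, a hosts block can serve a static mapping (the IP is the one from this question; treat it as a sketch, not the verbatim fix):

.:53 {
    # Illustration only: answer nextcloud.local from a static hosts entry,
    # then fall through to the rest of the CoreDNS plugin chain.
    hosts {
        10.43.223.221 nextcloud.local
        fallthrough
    }
    # ... keep the existing plugins (kubernetes, forward, cache, ...)
}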

Related

Clean way to connect to services running on the same host as the Kubernetes cluster

I have a single node Kubernetes cluster, installed using k3s on bare metal. I also run some services on the host itself, outside the Kubernetes cluster. Currently I use the external IP address of the machine (192.168.200.4) to connect to these services from inside the Kubernetes network.
Is there a cleaner way of doing this? What I want to avoid is having to reconfigure my Kubernetes pods if I decide to change the IP address of my host.
Possible magic I wish existed: a Kubernetes service or IP that automagically points to my external IP (192.168.200.4), or a DNS name that points to the node's external IP address.
That's what ExternalName services are for (https://kubernetes.io/docs/concepts/services-networking/service/#externalname):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: ${my-hostname}
  ports:
  - port: 80
Then you can access the service from within Kubernetes as my-service.${namespace}.svc.cluster.local.
See: https://livebook.manning.com/concept/kubernetes/external-service
After the service is created, pods can connect to the external service through the external-service.default.svc.cluster.local domain name (or even external-service) instead of using the service’s actual FQDN. This hides the actual service name and its location from pods consuming the service, allowing you to modify the service definition and point it to a different service any time later, by only changing the externalName attribute or by changing the type back to ClusterIP and creating an Endpoints object for the service—either manually or by specifying a label selector on the service and having it created automatically.
ExternalName services are implemented solely at the DNS level—a simple CNAME DNS record is created for the service. Therefore, clients connecting to the service will connect to the external service directly, bypassing the service proxy completely. For this reason, these types of services don’t even get a cluster IP.
This relies on using a resolvable hostname of your machine. On minikube there's a DNS alias host.minikube.internal that is set up to resolve to an IP address that routes to your host machine; I don't know if k3s supports something similar.
Thanks @GeertPt,
With minikube's host.minikube.internal in mind I searched around and found that CoreDNS has a DNS entry for each host it's running on. This only seems to be the case for k3s.
Checking
kubectl -n kube-system get configmap coredns -o yaml
reveals there is the following entry:
NodeHosts: |
  192.168.200.4 my-hostname
So if the hostname doesn't change, I can use this instead of the IP.
Also, if you're running plain docker you can use host.docker.internal to access the host.
So to sum up:
from minikube: host.minikube.internal
from docker: host.docker.internal
from k3s: <hostname>
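As a quick sanity check (a sketch only; my-hostname stands in for whatever name appears in your NodeHosts entry), you can resolve the node hostname from a throwaway pod:

kubectl run dnstest --rm -it --image=busybox:1.36 --restart=Never -- nslookup my-hostname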

How to Add Internal DNS Records in Kubernetes

I'm currently setting up a Kubernetes cluster where both private and public services are run. While public services should be accessible via the internet (and FQDNs), private services should not (the idea is to run a VPN inside the cluster where private services should be accessible via simple FQDNs).
At the moment, I'm using nginx-ingress and configuring Ingress resources where I set the hostname for public resources. external-dns then adds the corresponding DNS records (in Google CloudDNS) - this already works.
The problem I'm facing now: I'm unsure about how I can add DNS records in the same way (i.e. simply specifying a host in Ingress definitions and using some ingress-class private), yet have these DNS records only be accessible from within the cluster.
I was under the impression that I could add these records to the Corefile that CoreDNS is using. However, I can't figure out how this can be automated.
Thank you for any help!
If you don't want them to be accessed publicly, you don't want to add ingress rules for them. Ingress is only to route external traffic into your cluster.
All your services are already registered in CoreDNS and accessible with their local name, no need to add anything else.
I managed to resolve the problem myself... wrote a little Go application which watches Ingress resources and adds rewrite rules to the Corefile read by CoreDNS accordingly... works like a charm :)
PS: If anyone wants to use the tool, let me know. I'm happy to make it open-source if there is any demand.
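For reference, the kind of rule such a tool appends is a CoreDNS rewrite directive. This is a sketch only; the private hostname and the ingress controller's service name below are assumptions, not taken from the setup above:

.:53 {
    # Illustration: answer queries for a private Ingress hostname with the
    # cluster-internal name of the ingress controller's Service.
    rewrite name app.internal.example.com ingress-nginx-controller.ingress-nginx.svc.cluster.local
    # ... rest of the default Corefile (kubernetes, forward, cache, ...)
}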
Kubernetes has built-in DNS, and each service receives an internal FQDN.
These services are not available from the outside unless:
the service type is 'LoadBalancer', or
you define an Ingress for that service (assuming you have an ingress controller like nginx already deployed).
So your sample service deployed in the 'default' namespace is accessible inside the cluster out of the box via
service1.default.svc.cluster.local
You can change the name by specifying a custom ExternalName:
apiVersion: v1
kind: Service
metadata:
  name: service1
  namespace: prod
spec:
  type: ExternalName
  externalName: service1.database.example.com
Note that no proxying is done; for this to work, you need to make sure the given new name is routable from within your cluster (outbound connections are allowed, etc.).
As your k8s cluster is hosted on Google Cloud, you can try to use Cloud DNS. There you can add a private zone with your DNS name.
Then you can push this DNS server to your clients in your VPN configuration with:
push "dhcp-option DOMAIN gitlab.internal.example.com"
push "dhcp-option DNS 169.254.169.254"
169.254.169.254 is Google's DNS, only accessible from inside a Google private network.
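A sketch of creating such a private zone with gcloud (the zone name, DNS name, and network are placeholders, not values from this setup):

gcloud dns managed-zones create internal-zone \
    --description="Private zone for internal services" \
    --dns-name="internal.example.com." \
    --visibility=private \
    --networks="default"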
If you have an internal DNS server that can resolve the FQDNs, then you can configure the Corefile to forward internal service domain resolution to that DNS server.
For example, if the internal domains/FQDNs match *.mycompany.local, the Corefile could have a section for that:
mycompany.local {
    log
    errors
    ready
    cache 10
    forward . <internal DNS server IP> {
    }
}
All requests to app.mycompany.local or frontend.middleware.backend.mycompany.local will be forwarded to your internal DNS server for resolution.
Documentation of forward plugin is available here: https://coredns.io/plugins/forward/
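To apply such a change in a typical setup (a sketch; the ConfigMap name and namespace are the common defaults, so verify them in your cluster, and note that some distributions such as k3s manage this ConfigMap themselves):

kubectl -n kube-system edit configmap coredns
# CoreDNS's reload plugin usually picks up the change; otherwise restart it:
kubectl -n kube-system rollout restart deployment coredns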

Access minikube ingress without setting /etc/hosts

Wondering if there is any way I can discover Ingress resources on my host machine without manually setting a DNS entry in the /etc/hosts file every time. I also don't want to have to run minikube tunnel or anything like that. If the VM is running on my machine and I can access the Ingress with an /etc/hosts entry, there should be some way to access the resource without having to go through all that trouble.
Accessing minikube k8s services via the external IP address enforced by the minikube tunnel command is probably not a good way of reaching the nested application endpoints. That approach just assigns an external IP (originally the ClusterIP) to every k8s service exposed with type LoadBalancer, as per the minikube Load Balancer Controller design.
Assuming that you've already enabled the NGINX Ingress controller add-on for the relevant minikube instance, you can expose a particular k8s service with type NodePort and point to it in the corresponding Ingress resource, as described in this example from the K8s tutorial pages; hence you don't need minikube tunnel any longer (a sketch of this setup follows below).
As for the DNS discovery method, I suppose that adding a domain name and translating it to the origin IP address via the /etc/hosts file is the most common way, considering that you don't have a record for this domain name in the DNS resolvers configured on your Linux machine.
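A minimal sketch of the NodePort-plus-Ingress setup mentioned above (service name, port, and hostname are placeholders, not from the minikube tutorial):

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: hello-world.example
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080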

Create an Ingress resource in Minikube

I'm new to Ingress (k8s) and studying via the documentation. Here is the official documentation - Ingress Minikube. Under "Create an Ingress resource", I've already done steps 1-3 with no problem, but I can't seem to do what step 4 asks. I have located the /etc/hosts file inside the ingress-nginx-controller pod, but I have no idea how to edit it. I can't install vim-tiny or any other editing tools because permission is denied. sudo does not work. I just want to edit the /etc/hosts file.
This particular step (#4) should be done on your localhost, not inside the ingress-controller pod. It's just for mapping a hostname to an IP address, so that you can verify whether you can reach your application, exposed by the Ingress resource, from outside.
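A sketch of what that step looks like on the host (hello-world.info is the hostname the tutorial's Ingress uses; adjust it to whatever your Ingress declares):

# Append a hosts entry mapping the tutorial hostname to the minikube node's IP
echo "$(minikube ip) hello-world.info" | sudo tee -a /etc/hosts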
That is, the /etc/hosts mapping (or, properly, a DNS A record) is what exposes your application to the network outside the Kubernetes cluster.
ingress > service > POD

Kubernetes: The proxy server is refusing connections

I have started with Kubernetes and followed this link to get the response as they mentioned. I followed the exact steps, but when I try to open the port I get the following error:
How do I solve this issue? I have tried adding the IP address and port to the browser proxy.
Can anyone help me on this?
Here is the service image: my service image
List of pods: Kubectl Pods
List of kubectl deployments: Deployment List
I believe you're using bare metal (a simple laptop) to deploy your service.
If you look at my-service, it is in a pending state and it is of type LoadBalancer. The LoadBalancer type is supported only by cloud providers like AWS, Azure, and Google Cloud. Hence you are not able to access anything.
I suggest you follow this tutorial, which shows how to deploy nginx as a pod, create a service around it, and expose that service as a NodePort (without a load balancer) so it can be accessed from outside. A sketch of such a NodePort setup follows below.
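A minimal sketch of that approach (names, labels, and the node port are placeholders, not taken from the linked tutorial):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080   # then reachable at http://<node IP>:30080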