Create an Ingress resource in Minikube - kubernetes

I'm new to Ingress (k8s) and studying via the documentation. Here is the
official documentation - Ingress Minikube. Under "Create an Ingress resource", I've already done steps 1-3 with no problem, but I can't seem to do what step 4 asks. I have located the file inside the ingress-nginx-controller pod: /etc/hosts, but I have no idea how to edit it. I can't install vim-tiny or any other editing tools because permission is denied, and sudo does not work. I just want to edit the /etc/hosts file.

This particular step (#4) should be done on your localhost, not inside the ingress-controller pod. It's just a mapping of hostname to IP address, so that you can verify from outside the cluster that you can reach the application exposed by the Ingress resource.

That IP address, or rather that step, sets up a DNS A record that exposes your application to the network outside the Kubernetes cluster.
ingress > service > POD
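Concretely, step 4 amounts to appending one line to /etc/hosts on your own machine. The IP and hostname below are placeholders; on a real setup you would use the output of `minikube ip` and the host from your Ingress rule:

```shell
# Placeholder values -- on a real machine use: MINIKUBE_IP="$(minikube ip)"
MINIKUBE_IP="192.168.49.2"
INGRESS_HOST="hello-world.info"   # the host: field from your Ingress resource

# Append to a scratch copy here; for the real file you would run:
#   echo "${MINIKUBE_IP} ${INGRESS_HOST}" | sudo tee -a /etc/hosts
HOSTS_FILE="$(mktemp)"
echo "${MINIKUBE_IP} ${INGRESS_HOST}" >> "${HOSTS_FILE}"
cat "${HOSTS_FILE}"
```

After that, `curl http://hello-world.info/` from your machine should reach the Ingress.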

Related

How to configure nginx ingress rules without "host"

I have installed the nginx ingress in Kubernetes from the official documentation. But when configuring the rules without mentioning the "host", I am getting the error below.
error
++++++
spec.rules[0].host: Required value
Is it possible to configure it without a host, as I want to access it using only the IP address?
I also found the deployment file below, with which I am able to apply rules without "host", but I'm not sure whether this is safe to use. Please guide me here.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/cloud/deploy.yaml
Do you mean to configure the Ingress? The ingress controller is different from the Ingress itself. If you are configuring an Ingress, then host is completely optional. If host is omitted, the rule applies to all inbound HTTP traffic reaching the controller's IP address. Refer to this documentation for more info: https://kubernetes.io/docs/concepts/services-networking/ingress/
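For illustration, an Ingress rule can omit host entirely; a minimal sketch (service name and port are hypothetical) that matches all HTTP traffic arriving at the controller's IP:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ip-only-ingress        # hypothetical name
spec:
  rules:
  - http:                      # no "host:" key -- rule applies to any hostname or bare IP
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service   # hypothetical backend service
            port:
              number: 80
```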

Forward internal request to internal DNS with CoreDNS w/ Kubernetes

I'd like to connect to my Nextcloud instance using the internal DNS server (CoreDNS) provided by Kubernetes. I am remotely connected to the cluster through a Wireguard VPN deployed in Kubernetes:
It clearly states that I am using the CoreDNS server 10.43.0.10 used by all other services:
My nextcloud instance is using the traefik ingress controller described in this file:
Putting
10.43.223.221 nextcloud.local
in my /etc/hosts allows me to access the instance, but if I add a line in my Corefile (as seen in the photo below) to route nextcloud.local to 10.43.223.221, nothing happens.
What should I do to make it work? I want every peer connected to that Wireguard instance to be able to use those DNS queries.
Thanks!
I managed to solve my problem by following the solution described in CoreDNS do not respect local DNS. I just added this into my corefile:

Kubernetes nginx ingress controller not working

I'm using minikube to have a local Kubernetes cluster.
I enabled the ingress addon with the following command :
minikube addons enable ingress
I have created an application that is currently running in my cluster, along with a service tied to it, plus an Ingress resource where mydomain.name forwards traffic to that service.
The issue that I have: this will not work if I don't add an entry like the following one to the /etc/hosts file :
<kubernetes_cluster_ip_adress> mydomain.name
I thought having the ingress controller with an Ingress resource would take care of DNS resolution, and that I wouldn't need to add any entry to the /etc/hosts file, but maybe I am wrong.
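For a quick check without editing /etc/hosts, curl can do the hostname-to-IP mapping for a single request via `--resolve`. The snippet below only assembles the command; the domain and IP are placeholders for your own values (on a real setup you would use `$(minikube ip)`):

```shell
# Placeholders -- on a real setup: CLUSTER_IP="$(minikube ip)"
CLUSTER_IP="192.168.49.2"
HOST="mydomain.name"

# --resolve maps the hostname to the cluster IP for this one request,
# so no /etc/hosts entry is required:
CMD="curl --resolve ${HOST}:80:${CLUSTER_IP} http://${HOST}/"
echo "${CMD}"
```

Running the printed command against a live cluster should return your service's response if the Ingress rule is correct.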
Thank you for your help

Access minikube ingress without setting /etc/hosts

Wondering if there is any way I can discover ingress resources on my host machine without manually setting a DNS entry in the /etc/hosts file every time. I also don't want to have to run minikube tunnel or anything like that. If the VM is running on my machine and I can access the ingress with an /etc/hosts entry, there should be some way to access the resource without going through all that trouble.
Accessing minikube K8s services via the External IP address enforced by the minikube tunnel command is probably not a good way to reach the nested application endpoints. That approach just assigns an External IP (derived from the origin ClusterIP) to all K8s services exposed with type LoadBalancer, as per the minikube Load Balancer Controller design.
Assuming that you've already enabled the NGINX Ingress controller add-on for the relevant minikube instance, you can expose a particular K8s service with type NodePort and point to it in the corresponding Ingress resource, as described in this example from the K8s tutorial pages; hence you don't need to run the tunnel any longer.
As for the DNS discovery method, I suppose that adding a domain name and translating it to the origin IP address via the /etc/hosts file is the most common way, considering that you don't have a record for this domain name across the DNS resolvers configured on your Linux machine.
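The NodePort exposure mentioned above could look like the following Service manifest (the name, label, and ports are assumptions for illustration); the Ingress resource would then reference this service as its backend:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                # hypothetical service name, referenced by the Ingress backend
spec:
  type: NodePort           # reachable at <minikube ip>:<nodePort>, no tunnel needed
  selector:
    app: web               # hypothetical pod label
  ports:
  - port: 8080
    targetPort: 8080
```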

Resolve custom dns in kubernetes cluster (AKS)

We currently have pods in a kubernetes cluster (AKS) that need to resolve two different domains.
The first domain being the cluster domain default.svc.cluster.local and the second one being mydns.local.
How can this be achieved?
I found the solution myself.
There are two ways to achieve the desired name resolution:
If your AKS cluster is within an Azure VNET, you can set the DNS settings in the VNET to the custom DNS server that is able to resolve your custom domain. If your pods have no specified DNS settings, resolution will then work this way:
First the pods try to resolve the DNS request within CoreDNS; if they can't, they fall back to the DNS settings of the host and ask the DNS server configured there. Since in Azure the DNS settings of the VNET are applied to the virtual machines, this will reach the correct DNS server.
Modify the CoreDNS settings in your AKS cluster with the following YAML:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  yourdns.server: |
    yourdns.com:53 {
        errors
        cache 1
        proxy . 10.1.0.40
    }
Important to know: in AKS you can't overwrite the coredns ConfigMap; the Kubernetes master will always reset it to the default after a couple of seconds. If you want to add custom CoreDNS configuration in AKS, you have to name the ConfigMap "coredns-custom".
Note that yourdns.server is not actually the server; it is the domain. The DNS server IP is the one given in the proxy setting.
I think you can use an Ingress and an ingress controller to manage the domain and path. With an Ingress you can manage multiple domains and attach a service to a particular domain.
https://kubernetes.github.io/ingress-nginx/
Here is also a tutorial on setting up ingress from DigitalOcean that you can follow:
https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
Regarding your second point (modifying the CoreDNS settings in your AKS cluster):
Note that the "forward" plugin should be used in place of "proxy" as noted here:
https://github.com/Azure/AKS/issues/1304
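With that substitution applied, the coredns-custom ConfigMap from the earlier answer would become the following (same assumed domain and upstream DNS IP as above):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom     # AKS only merges a ConfigMap with this exact name
  namespace: kube-system
data:
  yourdns.server: |
    yourdns.com:53 {
        errors
        cache 1
        forward . 10.1.0.40   # "forward" replaces the deprecated "proxy" plugin
    }
```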