GCP Kubernetes Nodes update with Internal Private Nameservers - kubernetes

We have a Docker image repository on GitLab which is hosted on the internal network (repo.mycomapanydomain.io).
My K8s deployment is failing with a "name not resolved" error for repo.mycomapanydomain.io.
I tried updating the kube-dns config as below, but I still get the same error.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"mycomapanydomain": ["10.131.0.4"]}
  upstreamNameservers: |
    ["10.131.0.4"]
How can I make my resolv.conf use the internal nameservers by default, or make K8s resolve names with my internal DNS IPs?

Editing /etc/resolv.conf, either manually or automatically, is discouraged:
Internal DNS and resolv.conf
By default, most Linux distributions store DHCP information in resolv.conf. Compute Engine instances are configured to renew DHCP leases every 24 hours. For instances that are enabled for zonal DNS, the DHCP lease expires every hour. DHCP renewal overwrites this file, undoing any changes that you might have made. Instances using zonal DNS have both zonal and global entries in the resolv.conf file.
-- Cloud.google.com: Compute: Docs: Internal DNS: resolv.conf
Also:
Modifications on the boot disk of a node VM do not persist across node re-creations. Nodes are re-created during manual upgrade, auto-upgrade, auto-repair, and auto-scaling. In addition, nodes are re-created when you enable a feature that requires node re-creation, such as GKE sandbox, intranode visibility, and shielded nodes.
-- Cloud.google.com: Kubernetes Engine: Docs: Concepts: Node images: Modifications
As for:
How can I make my resolv.conf use the internal nameservers by default, or make K8s resolve names with my internal DNS IPs?
From the GCP and GKE perspective, you can use Cloud DNS to configure your DNS resolution in either of two ways:
your whole DOMAIN resides in GCP infrastructure (and you specify all the records), or
your DOMAIN queries are forwarded to the DNS server of your choosing.
You can create your DNS zone by following:
GCP Cloud Console (Web UI) -> Network Services -> Cloud DNS -> Create zone:
Assuming that you want to forward your DNS queries to your internal DNS server residing in GCP, create the zone as a private forwarding zone for your domain and point it at your internal DNS server; a CLI sketch follows below.
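If you prefer the CLI over the Cloud Console, a forwarding zone can be created with gcloud like this (a minimal sketch; the zone name and network are assumptions, and 10.131.0.4 is taken from your question):
gcloud dns managed-zones create mycompany-internal \
    --description="Forward mycomapanydomain.io to the internal DNS server" \
    --dns-name="mycomapanydomain.io." \
    --visibility=private \
    --networks="default" \
    --forwarding-targets="10.131.0.4"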
A side note!
Remember to follow the "Destination DNS Servers" steps to allow the DNS queries to your DNS server.
Put the internal IP address of your DNS server as the forwarding target of the zone.
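A firewall rule allowing those forwarded queries to reach your DNS server could look roughly like this (a sketch; the rule name and network are assumptions, and 35.199.192.0/19 is the source range Cloud DNS forwarding uses according to Google's documentation):
gcloud compute firewall-rules create allow-clouddns-forwarding \
    --network="default" \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:53,udp:53 \
    --source-ranges="35.199.192.0/19"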
After that your GKE cluster should be able to resolve the DNS queries of your DOMAIN.NAME.
Additional resources:
I found an article that shows how you can set up DNS forwarding for your GCP instances:
Medium.com: Faun: DNS forwarding zone and dns policy in GCP

Related

How to set DNS entries & network configuration for a Kubernetes cluster at home (noob here)

I am currently running a Kubernetes cluster on my own homeserver (in proxmox ct's, was kinda difficult to get working because I am using zfs too, but it runs now), and the setup is as follows:
lb01: haproxy & keepalived
lb02: haproxy & keepalived
etcd01: etcd node 1
etcd02: etcd node 2
etcd03: etcd node 3
master-01: k3s in server mode with a taint for not accepting any jobs
master-02: same as above, just joining with the token from master-01
master-03: same as master-02
worker-01 - worker-03: k3s agents
If I understand it correctly, k3s ships with flannel as the CNI pre-installed, as well as traefik as an Ingress Controller.
I've set up rancher on my cluster as well as longhorn; the volumes are just zfs volumes mounted inside the agents though, and as they aren't on different hdd's I've set the replicas to 1. I have a friend running the same setup (we set them up together, just yesterday) and we are planning on joining our networks through vpn tunnels and then providing storage nodes for each other as an offsite backup.
So far I've hopefully got everything correct.
Now to my question: I've got both a static IP at home and a domain, and I've pointed that domain at my static IP.
Something like this (I don't know how DNS entries are actually written, this is just from the top of my head for your reference; the entries are working fine):
A example.com. [[my-ip]]
CNAME *.example.com. example.com
I've currently made a port-forward to one of my master nodes for ports 80 & 443, but I am not quite sure how you would actually configure that with HA in mind. Also, my rancher is throwing a 503 after visiting global settings, but I have not changed anything.
So now my question: how would one actually configure the port-forward? As far as I know k3s has a load-balancer pre-installed, but how would one configure those port-forwards for HA? The one master node it's pointing to could, theoretically, just stop working, and then all services would not be reachable anymore from outside.
Assuming your apps are running on ports 80 and 443, your ingress should give you a service with an external IP, and you would point your DNS at that. Read below for more info.
Seems like you are not a noob! You got a lot going on with your cluster setup. What you are asking is a bit complicated to answer and I will have to make some assumptions about your setup, but I will do my best to give you at least some initial info.
This tutorial has a ton of great info and may help you with what you are doing. They use kubeadm instead of k3s, but you can skip that section if you want and still use k3s.
https://www.debontonline.com/p/kubernetes.html
If you are setting up and installing etcd on your own, you don't need to; k3s will create an etcd cluster for you that runs inside pods on your cluster.
Load Balancing your master nodes
The haproxy + keepalived nodes would be configured to point to the IPs of your master nodes at port 6443 (TCP). keepalived will give you a virtual IP, and you would configure your kubeconfig (that you get from k3s) to talk to that IP. On your router you will want to reserve an IP for it (make sure not to assign that to any computers).
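For illustration, a minimal haproxy configuration for that could look like this (a sketch; the master IP addresses are placeholders, not taken from your setup):
frontend k3s-apiserver
    bind *:6443
    mode tcp
    default_backend k3s-masters
backend k3s-masters
    mode tcp
    option tcp-check
    balance roundrobin
    server master-01 192.168.10.11:6443 check
    server master-02 192.168.10.12:6443 check
    server master-03 192.168.10.13:6443 check
keepalived then floats the reserved virtual IP between lb01 and lb02, and your kubeconfig points at that IP.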
This is a good video that explains how to do it with a nodejs server but concepts are the same for your master nodes:
https://www.youtube.com/watch?v=NizRDkTvxZo
Load Balancing your applications running in the cluster
Use a K8s Service; read more about it here: https://kubernetes.io/docs/concepts/services-networking/service/
Essentially you need an external IP; I prefer to do this with MetalLB.
MetalLB gives you a service of type LoadBalancer with an external IP.
Add this flag to k3s when creating the initial master node:
https://metallb.universe.tf/configuration/k3s/
Configure MetalLB:
https://metallb.universe.tf/configuration/#layer-2-configuration
You will want to reserve more IPs on your router and put them under the addresses section in the YAML below. In this example you will see you have 11 IPs in the range 192.168.1.240 to 192.168.1.250.
Create this as a file, for example metallb-cm.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
kubectl apply -f metallb-cm.yaml
Install MetalLB with these YAML files (note: apply the namespace manifest first, since the ConfigMap above goes into the metallb-system namespace):
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
source - https://metallb.universe.tf/installation/#installation-by-manifest
Ingress
Your ingress controller will need a service of type LoadBalancer; use its external IP as the external IP you point your DNS at.
Run kubectl get service -A, look for your ingress service, and check that it has an external IP and does not say pending.
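For completeness, an Ingress routing your domain to one of your apps could look roughly like this (a sketch; the host, service name and port are assumptions):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  namespace: default
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
With MetalLB in place, the ingress controller's LoadBalancer service gets one of the reserved IPs, and the router port-forward for 80/443 points at that IP instead of a single master node.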
I will do my best to answer any of your follow up questions. Good Luck!

How to Add Internal DNS Records in Kubernetes

I'm currently setting up a Kubernetes cluster where both private and public services are run. While public services should be accessible via the internet (and FQDNs), private services should not (the idea is to run a VPN inside the cluster where private services should be accessible via simple FQDNs).
At the moment, I'm using nginx-ingress and configure Ingress resources where I set the hostname for public resources. external-dns then adds the corresponding DNS records (in Google CloudDNS) - this already works.
The problem I'm facing now: I'm unsure about how I can add DNS records in the same way (i.e. simply specifying a host in Ingress definitions and using some ingress-class private), yet have these DNS records only be accessible from within the cluster.
I was under the impression that I can add these records to the Corefile that CoreDNS is using. However, I fail to figure out how this can be automated.
Thank you for any help!
If you don't want them to be accessed publicly, you don't want to add ingress rules for them. Ingress is only to route external traffic into your cluster.
All your services are already registered in CoreDNS and accessible with their local name, no need to add anything else.
I managed to resolve the problem myself... wrote a little Go application which watches Ingress resources and adds rewrite rules to the Corefile read by CoreDNS accordingly... works like a charm :)
PS: If anyone wants to use the tool, let me know. I'm happy to make it open-source if there is any demand.
Kubernetes has built-in DNS and each service receives an internal FQDN.
These services are not available from the outside unless:
the service type is 'LoadBalancer', or
you define an ingress for that service (assuming you have an ingress controller like nginx already deployed).
So your sample service deployed in the 'default' namespace is accessible inside the cluster out of the box via
service1.default.svc.cluster.local
You can change the name by specifying a custom ExternalName:
apiVersion: v1
kind: Service
metadata:
  name: service1
  namespace: prod
spec:
  type: ExternalName
  externalName: service1.database.example.com
Note that no proxying is done; for this to work, you need to make sure the given new name is routable from within your cluster (outbound connections are allowed, etc.).
As your k8s cluster is hosted on Google Cloud, you can try Cloud DNS. There you can add a private zone with your DNS name.
Then you can push this DNS server to your clients in your VPN configuration with:
push "dhcp-option DOMAIN gitlab.internal.example.com"
push "dhcp-option DNS 169.254.169.254"
169.254.169.254 is Google's DNS, only accessible from inside a Google private network.
If you have an internal DNS server that can resolve the FQDNs, then you can configure the Corefile to forward internal service domain resolution to that DNS server.
For example, if the internal domains/FQDNs match *.mycompany.local, the Corefile could have a section for them:
mycompany.local {
    log
    errors
    ready
    cache 10
    forward . <internal DNS server IP> {
    }
}
All the requests to app.mycompany.local or frontend.middleware.backend.mycompany.local will be forwarded to your internal DNS for resolution.
Documentation of forward plugin is available here: https://coredns.io/plugins/forward/
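On most clusters the Corefile lives in the coredns ConfigMap in the kube-system namespace, so the block above would be added there (a sketch; the default .:53 server block below varies between distributions, so keep whatever your cluster already has):
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
    mycompany.local {
        log
        errors
        ready
        cache 10
        forward . <internal DNS server IP> {
        }
    }
If the reload plugin is enabled, CoreDNS picks up the change automatically; otherwise restart the CoreDNS pods.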

Resolve custom DNS in a Kubernetes cluster (AKS)

We currently have pods in a kubernetes cluster (AKS) that need to resolve two different domains.
The first domain being the cluster domain default.svc.cluster.local, and the second one being mydns.local.
How can this be achieved?
I found the solution myself.
There are two ways to achieve the desired name resolution:
1. If your AKS cluster is within an Azure VNet, you can set the DNS settings of the VNet to the custom DNS server that is able to resolve your custom domain (for example with the Azure CLI, as sketched below). If your Pods have no specific DNS settings, the resolution will work this way:
First the Pods try to resolve the DNS request within CoreDNS; if CoreDNS can't resolve it, it falls back to the DNS settings of the host and asks the DNS server configured there. Since in Azure the DNS settings of the VNet are applied to the virtual machines, it will ask the correct DNS server.
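Setting the VNet DNS servers with the Azure CLI could look like this (a sketch; the resource group, VNet name and DNS server IP are assumptions):
az network vnet update \
    --resource-group my-rg \
    --name my-aks-vnet \
    --dns-servers 10.1.0.40
Note that the nodes only pick up changed VNet DNS settings after they are restarted.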
2. Modify the CoreDNS settings in your AKS cluster with the following YAML:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  yourdns.server: |
    yourdns.com:53 {
        errors
        cache 1
        proxy . 10.1.0.40
    }
Important to know: in AKS you can't overwrite the coredns ConfigMap; the Kubernetes master will always reset it to the default after a couple of seconds. If you want to customize CoreDNS in AKS, you have to name the ConfigMap "coredns-custom".
yourdns.server is actually not the server; the key is <domain>.server. The DNS server IP is the one behind the proxy setting.
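Applying the ConfigMap and making CoreDNS pick it up could look like this (a sketch; deleting the CoreDNS pods so they restart with the new config is the approach suggested in the AKS documentation):
kubectl apply -f coredns-custom.yaml
kubectl -n kube-system delete pod -l k8s-app=kube-dns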
I think you can use an ingress and ingress controller to manage the domain and path. With ingress you can manage multiple domains and attach a service to a particular domain.
https://kubernetes.github.io/ingress-nginx/
Here is also a tutorial from DigitalOcean on setting up ingress that you can follow:
https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
Regarding your second point ("2. Modify the CoreDNS settings in your AKS cluster with the following YAML"):
Note that the "forward" plugin should be used in place of "proxy" as noted here:
https://github.com/Azure/AKS/issues/1304

What is the reason for creating a LoadBalancer when creating a cluster using kops

I tried to create a k8s cluster on AWS using kops.
After creating the cluster with the default definition, I saw that a LoadBalancer had been created.
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: bungee.staging.k8s.local
spec:
  api:
    loadBalancer:
      type: Public
  ....
I am just wondering about the reason for creating the LoadBalancer along with the cluster.
Appreciate it!
In the type of cluster that kops creates, the apiserver (referred to as api above; a component of the Kubernetes master, aka control plane) may not have a static IP address. Also, kops can create an HA (replicated) control plane, which means there will be multiple IPs where the apiserver is available.
The apiserver functions as a central connection hub for all other Kubernetes components; for example, all the nodes connect to it, but human operators also connect to it via kubectl, using configuration files (kubeconfig). For one, these configuration files do not support multiple IP addresses for the apiserver (so as to make use of the HA setup). Plus, updating the configuration files every time the apiserver IP address(es) change would be difficult.
So the load balancer functions as a front for the apiserver(s) with a single, static IP address (an anycast IP with AWS/GCP). This load balancer IP is specified in the configuration files of Kubernetes components instead of actual apiserver IP(s).
Actually, it is also possible to solve this problem by using a DNS name that resolves to the IP(s) of the apiserver(s), coupled with a mechanism that keeps this record updated. This solution can't react to changes of the underlying IP(s) as fast as a load balancer can, but it does save you a couple of bucks, plus it is slightly less likely to fail and creates less dependency on the cloud provider. This can be configured like so:
spec:
  api:
    dns: {}
See the specification for more details.

Kubernetes custom domain name

I'm running Kubernetes with a Minikube node on my machine. The pods are accessing each other by their .metadata.name, and I would like to have a custom domain for that name.
i.e. one pod accesses Elasticsearch's machine via elasticsearch.blahblah.com
Thanks for any suggestions
You should have DNS records for pods by default due to the kube-dns addon being enabled by default in Minikube.
To check the kube-dns addon status, use the command below:
kubectl get pod -n kube-system
Please find below how the cluster add-on DNS server works:
An optional (though strongly recommended) cluster add-on is a DNS server. The DNS server watches the Kubernetes API for new Services and creates a set of DNS records for each. If DNS has been enabled throughout the cluster then all Pods should be able to do name resolution of Services automatically.
For example, if you have a Service called "my-service" in Kubernetes Namespace "my-ns" a DNS record for "my-service.my-ns" is created. Pods which exist in the "my-ns" Namespace should be able to find it by simply doing a name lookup for "my-service". Pods which exist in other Namespaces must qualify the name as "my-service.my-ns". The result of these name lookups is the cluster IP.
Kubernetes also supports DNS SRV (service) records for named ports. If the "my-service.my-ns" Service has a port named "http" with protocol TCP, you can do a DNS SRV query for "_http._tcp.my-service.my-ns" to discover the port number for "http".
The Kubernetes DNS server is the only way to access services of type ExternalName.
You can follow the Configure DNS Service document for configuration instructions.
Also, you can check DNS for Services and Pods for additional information.
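If you specifically want a name like elasticsearch.blahblah.com to resolve to an in-cluster service, one option is the CoreDNS rewrite plugin (a sketch; it assumes your cluster runs CoreDNS and that the Elasticsearch service is named elasticsearch in the default namespace):
kubectl -n kube-system edit configmap coredns
Then add this line inside the main .:53 { ... } server block of the Corefile:
rewrite name elasticsearch.blahblah.com elasticsearch.default.svc.cluster.local
After that, pods resolving elasticsearch.blahblah.com get the cluster IP of the elasticsearch service.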