How to access an external service defined in the hosts file from a Kubernetes cluster? - kubernetes

We need to connect to nginx-test.dup.com on port 80 from our kubernetes cluster.
We are using ExternalName but nginx-test.dup.com is only defined in /etc/hosts.
Is there a way to make that service available from within the Kubernetes cluster? We also tried adding hostNetwork: true
as described in "How do I access external hosts from within my cluster?"
and got the following error:
error validating data: ValidationError(Service.spec): unknown field "hostNetwork" in io.k8s.api.core.v1.ServiceSpec
kind: Service
apiVersion: v1
metadata:
  name: nginx-test
spec:
  ports:
  - name: test-https
    port: 8080
    targetPort: 80
  type: ExternalName
  externalName: nginx-test.dup.com

CoreDNS doesn't take the node's /etc/hosts into account. You can add a hosts section to the CoreDNS ConfigMap manually.
# kubectl edit cm coredns -n kube-system
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        # Add the hosts section from here
        hosts {
            xxx.xxx.xxx.xxx nginx-test.dup.com
            fallthrough
        }
        # to here
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        ...
    }
Please note that it will take some time for the new setting to be used.
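If you don't want to wait for CoreDNS to pick the change up, you can restart its pods and then check the record from a throwaway pod (this assumes the default coredns Deployment in kube-system; the name can differ per distribution):

# restart CoreDNS so the edited Corefile is loaded right away
kubectl -n kube-system rollout restart deployment coredns

# verify the new record resolves inside the cluster
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup nginx-test.dup.com

Once nginx-test.dup.com resolves inside the cluster, the ExternalName Service from the question should start working as well.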

Related

How to manipulate the CoreDNS Corefile in an rke2 k8s cluster?

I'm using an rke2 cluster, i.e. a k8s distribution.
I want to add a nameserver for '*.example.org' to the cluster DNS system, for which I need to change the Corefile of CoreDNS like below.
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . 172.16.0.1
    cache 30
    loop
    reload
    loadbalance
}
example.org:53 { # add an extra server block
    errors
    cache 30
    forward . 10.150.0.1
}
However, rke2 installs CoreDNS via its Helm system, so I should change the Helm values to add this to the Corefile.
How can I achieve this? Thank you.
You can edit the ConfigMap and map the domain to the service name using the rewrite plugin:

    rewrite name example.io service.default.svc.cluster.local
    loadbalance round_robin
    prometheus {$POD_IP}:9153
    forward . /etc/resolv.conf
    reload

YAML for reference:
apiVersion: v1
data:
  Corefile: |-
    .:5353 {
        bind {$POD_IP}
        cache 30
        errors
        health {$POD_IP}:8080
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        rewrite name example.io service.default.svc.cluster.local
        loadbalance round_robin
        prometheus {$POD_IP}:9153
        forward . /etc/resolv.conf
        reload
    }
kind: ConfigMap
metadata:
  labels:
    app: coredns
    k8s-app: coredns
  name: coredns
  namespace: kube-system
Other answers for reference:
https://stackoverflow.com/a/73078010/5525824
https://stackoverflow.com/a/70672297/5525824
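If you want to keep the change in Helm values (so rke2 doesn't overwrite a hand-edited ConfigMap on upgrade), rke2 lets you override chart values with a HelmChartConfig manifest dropped into its manifests directory. The sketch below assumes the rke2-coredns chart follows the upstream CoreDNS chart's servers/plugins values layout; check the values of the chart version you actually run before applying it:

# /var/lib/rancher/rke2/server/manifests/rke2-coredns-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-coredns
  namespace: kube-system
spec:
  valuesContent: |-
    servers:
    - zones:
      - zone: .
      port: 53
      plugins:
      - name: errors
      - name: health
      - name: ready
      - name: kubernetes
        parameters: cluster.local in-addr.arpa ip6.arpa
        configBlock: |-
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
      - name: prometheus
        parameters: :9153
      - name: forward
        parameters: . 172.16.0.1
      - name: cache
        parameters: 30
      - name: loop
      - name: reload
      - name: loadbalance
    # extra server block for example.org
    - zones:
      - zone: example.org
      port: 53
      plugins:
      - name: errors
      - name: cache
        parameters: 30
      - name: forward
        parameters: . 10.150.0.1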

How is Kubernetes Service IP assigned and stored?

I deployed a service myservice to the k8s cluster. Using kubectl describe service ..., I can find that the service IP is 172.20.127.114. I am trying to figure out how this service IP is assigned. Is it assigned by the Kubernetes controller and stored in DNS? How does Kubernetes decide on the IP range?
kubectl describe service myservice

Name:               myservice
Namespace:          default
Labels:             app=myservice
                    app.kubernetes.io/instance=myservice
Annotations:        argocd.argoproj.io/sync-wave: 3
Selector:           app=myservice
Type:               ClusterIP
IP Family Policy:   SingleStack
IP Families:        IPv4
IP:                 172.20.127.114
IPs:                172.20.127.114
Port:               <unset>  80/TCP
TargetPort:         5000/TCP
Endpoints:          10.34.188.30:5000,10.34.89.157:5000
Session Affinity:   None
Events:             <none>
The Kubernetes API server accepts the Service CIDR range via the --service-cluster-ip-range parameter; Service IPs are assigned from this CIDR block. (The API server pod name might vary between environments, so update the pod name accordingly.)
Pod IP addresses come from the CNI.
The API server, etcd, kube-proxy, scheduler and controller-manager IP addresses come from the server/node IP address.
The Service IP address range is defined in the API server configuration.
If we check the API server configuration, we can see the --service-cluster-ip-range=10.96.0.0/12 option in the command section, a CIDR-notation IP range from which to assign service cluster IPs:
sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml
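For reference, the relevant part of the static pod manifest on a kubeadm control plane node looks roughly like this (excerpt only, with hypothetical flag values):

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --advertise-address=1.2.3.4
    - --service-cluster-ip-range=10.96.0.0/12   # Service ClusterIPs are allocated from this range
    # ...many other flags omitted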
See all default configurations:
kubeadm config print init-defaults
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: 1.24.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Change Default CIDR IP Range
You can configure the kube-apiserver in a couple of ways:
1. When bootstrapping the cluster, via kubeadm init --service-cidr <IP Range>
2. By changing the kube-apiserver static pod manifest directly (the kubelet periodically scans the manifests for changes):
sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml
Note that with option number 2 you are going to get a "The connection to the server IP:6443 was refused - did you specify the right host or port?"
error for a while, so you have to wait a couple of minutes for kube-apiserver to start again.
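A simple way to wait for the API server to come back is a plain shell loop (nothing kubeadm-specific about it):

until kubectl get --raw='/readyz' >/dev/null 2>&1; do
  echo "waiting for kube-apiserver..."
  sleep 5
done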
The new CIDR block only applies to newly created Services, which means
old Services keep their IPs from the old CIDR block. For testing:
kubectl create service clusterip test-cidr-block --tcp 80:80
Then check the newly created Service:
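For example (the jsonpath query is just one way to print the allocated ClusterIP):

kubectl get service test-cidr-block -o jsonpath='{.spec.clusterIP}{"\n"}'

The printed address should fall inside the new service CIDR.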

Is there a reversal of `ExternalName` service in Kubernetes?

I'm trying to move my development environment to Kubernetes to be more in line with existing deployment stages. In that context I need to call a service by its Ingress DNS name internally, while this DNS name resolves to an IP unreachable from the cluster itself. I would like to create a DNS alias inside the cluster which would point to the service, basically a reversal of a ExternalName service.
Example:
The external DNS name is my-service.my-domain.local, resolving to 127.0.0.1
Internal service is my-service.my-namespace.svc.cluster.local
A process running in a pod can't reach my-service.my-domain.local because of the IP it resolves to, but it could reach my-service.my-namespace.svc.cluster.local; however, it needs to access the former by name
I would like to have a cluster-internal DNS name my-service.my-domain.local, resolving to the service my-service.my-namespace.svc.cluster.local (ExternalName service would do the exact opposite).
Is there a way to implement this in Kubernetes?
You can use CoreDNS and add the entry there using the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
  labels:
    eks.amazonaws.com/component: coredns
    k8s-app: kube-dns
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    domain-name:port {
        errors
        cache 30
        forward . <IP or custom DNS>
        reload
    }
To test, you can start a busybox pod:
kubectl run busybox --restart=Never --image=busybox:1.28 -- sleep 3600
and hit the domain name from inside busybox:
kubectl exec busybox -- nslookup domain-name
Official doc ref: https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/
Nice article for ref: https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/
Or you can map the domain to the service name using the rewrite plugin of CoreDNS, which resolves a specified domain name
to the domain name of a Service:
rewrite name example.io service.default.svc.cluster.local
apiVersion: v1
data:
  Corefile: |-
    .:5353 {
        bind {$POD_IP}
        cache 30
        errors
        health {$POD_IP}:8080
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        rewrite name example.io service.default.svc.cluster.local
        loadbalance round_robin
        prometheus {$POD_IP}:9153
        forward . /etc/resolv.conf
        reload
    }
kind: ConfigMap
metadata:
  labels:
    app: coredns
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: CoreDNS
    release: cceaddon-coredns
  name: coredns
  namespace: kube-system
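Applied to the names used in this question, the single line to add to the default server block would be (hypothetical, reusing the question's hostnames):

rewrite name my-service.my-domain.local my-service.my-namespace.svc.cluster.local

You can verify it the same way as above, e.g. kubectl exec busybox -- nslookup my-service.my-domain.local should return the ClusterIP of my-service (depending on the client, you may also need the rewrite plugin's answer option so the name in the response matches the query).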

Add a simple A record to the CoreDNS service on Kubernetes

Here is the issue:
We have several microk8s clusters running on different networks, yet each has access to our storage network where our NAS appliances are.
Within Kubernetes, we create disks with an NFS provisioner (nfs-externalsubdir). Some disks were created with the IP of the NAS server specified.
Once we had to change the IP, we discovered that the disk was bound to it, and changing the IP meant creating a new storage resource.
To avoid this, we would like to be able to set a DNS record at the Kubernetes cluster level so we could create storage resources with the NFS provisioner based on a name and not an IP, and alter the DNS record when needed (when we upgrade or migrate our external NAS appliances, for instance).
For instance, I'd like to tell every microk8s environment that:
192.168.1.4 my-nas.mydomain.local
... like I would within the /etc/hosts file.
Is there a proper way to achieve this?
I tried to follow the advice in this link: Is there a way to add arbitrary records to kube-dns?
(the answer upvoted 15 times, the cluster-wide section) and restarted a deployment, but it didn't work.
I cannot use the hostAliases feature since it isn't exposed by every chart we are using, which is why I'm looking for a more global solution.
Best Regards,
You can set your custom DNS in K8s using kube-dns (CoreDNS).
You have to inject/pass the configuration as a ConfigMap to the CoreDNS volume.
The ConfigMap will look like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
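To answer the actual question, the piece to add to that Corefile is a hosts block, the same approach as in the first answer on this page. With the name and IP from the question it would look like this (added inside the .:53 { ... } block, before the kubernetes plugin):

hosts {
    192.168.1.4 my-nas.mydomain.local
    fallthrough
}

When the NAS IP changes later, you only edit this one entry and let CoreDNS reload it.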
You can read more about it at:
https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/
https://platform9.com/kb/kubernetes/how-to-customize-coredns-configuration-for-adding-additional-ext
Alternatively, you can also use ExternalDNS together with CoreDNS.
You can annotate the Service (resource) and ExternalDNS will add the address to CoreDNS.
Read more about it at:
https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/coredns.md
https://docs.mirantis.com/mcp/q4-18/mcp-deployment-guide/deploy-mcp-cluster-using-drivetrain/deploy-k8s/external-dns/verify-external-dns/coredns-etxdns-verify.html
...we could create storage resources with the NFS provisioner based on a name and not an IP, and we could alter the DNS record when needed...
For this you can try a headless Service without touching CoreDNS:
apiVersion: v1
kind: Service
metadata:
  name: my-nas
  namespace: kube-system # <-- you can place it somewhere else
  labels:
    app: my-nas
spec:
  clusterIP: None # headless: DNS resolves straight to the endpoint address
  ports:
  - protocol: TCP
    port: <nas port>
---
apiVersion: v1
kind: Endpoints
metadata:
  name: my-nas
  namespace: kube-system # must match the Service's namespace
subsets:
- addresses:
  - ip: 192.168.1.4
  ports:
  - port: <nas port>
Use it as: my-nas.kube-system.svc.cluster.local
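A quick way to confirm the Service and the Endpoints object are linked correctly (they must share the same name and namespace):

kubectl -n kube-system get service my-nas
kubectl -n kube-system get endpoints my-nas
# the ENDPOINTS column should show 192.168.1.4:<nas port>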

All Kubernetes proxy targets down - Prometheus Operator

I have a k8s cluster deployed in OpenStack and I have deployed the Prometheus Operator to monitor it. But I am getting a Kubernetes proxy down alert for all the nodes.
I would like to know the basics of how the Prometheus Operator scrapes kube-proxy, and what configuration needs to be done to fix this.
I can see that kube-proxy is running on all nodes on port 10249.
Error:
Get http://10.8.10.11:10249/metrics: dial tcp 10.8.10.11:10249: connect: connection refused
Helm values configuration:
kubeProxy:
  enabled: true
  ## If your kube proxy is not deployed as a pod, specify IPs it can be found on
  ##
  endpoints: []
  # - 10.141.4.22
  # - 10.141.4.23
  # - 10.141.4.24
  service:
    port: 10249
    targetPort: 10249
    # selector:
    #   k8s-app: kube-proxy
  serviceMonitor:
    ## Scrape interval. If not set, the Prometheus default scrape interval is used.
    ##
    interval: ""
    ## Enable scraping kube-proxy over https.
    ## Requires proper certs (not self-signed) and delegated authentication/authorization checks
    ##
    https: false
Set the kube-proxy metricsBindAddress argument:
$ kubectl edit cm/kube-proxy -n kube-system
...
kind: KubeProxyConfiguration
metricsBindAddress: 0.0.0.0:10249
...
$ kubectl delete pod -l k8s-app=kube-proxy -n kube-system
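After the kube-proxy pods come back, you can check that the metrics endpoint answers, for example against the node from the alert (assuming curl is available wherever you run this):

curl -s http://10.8.10.11:10249/metrics | head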