How to get the FQDN DNS name of a Kubernetes service?

How to get a full FQDN of the service inside Kubernetes?
➜ k get svc -o wide
NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE   SELECTOR
airflow-flower-service   ClusterIP   172.20.119.107   <none>        5555/TCP   20d   app=edna-airflow
airflow-service          ClusterIP   172.20.76.63     <none>        80/TCP     20d   app=edna-airflow
backend-service          ClusterIP   172.20.39.154    <none>        80/TCP     20d   app=edna-backend
So how can I query the internal Kubernetes DNS to get the FQDN of, for example, backend-service?

Go inside any pod in the same namespace with kubectl exec -ti <your pod> bash, then run nslookup <your service>. The FQDN will typically be (unless you changed the cluster's DNS configuration): yourservice.yournamespace.svc.cluster.local
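For example, to resolve backend-service from the question (a sketch: the pod name is a placeholder, the namespace is assumed to be default, the pod image is assumed to ship nslookup, and the output is abridged and illustrative):

$ kubectl exec -ti <your pod> -- nslookup backend-service
...
Name:    backend-service.default.svc.cluster.local
Address: 172.20.39.154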

Related

Unable to reach service/API from outside the cluster - Kubernetes (Metallb+HAProxy Ingress Controller)

I've created a bare-metal multi-master k8s cluster using kubekey.
$ kubectl get nodes
NAME      STATUS   ROLES                  AGE   VERSION
master1   Ready    control-plane,master   23h   v1.23.10
master2   Ready    control-plane,master   23h   v1.23.10
master3   Ready    control-plane,master   23h   v1.23.10
worker1   Ready    worker                 23h   v1.23.10
worker2   Ready    worker                 23h   v1.23.10
worker3   Ready    worker                 23h   v1.23.10
$ curl localhost:10249/healthz
ok
Added MetalLB load balancer and HAProxy Ingress Controller. The haproxy-controller gets the external IP address from the Metallb correctly:
$ kubectl get svc -n haproxy-controller
NAME                         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                     AGE
haproxy-kubernetes-ingress   LoadBalancer   10.233.59.120   10.30.2.81    80:32244/TCP,443:30908/TCP,1024:32666/TCP   21h
Deployed a microservice, and exposed the service via ingress:
$ kubectl get svc
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes      ClusterIP   10.233.0.1     <none>        443/TCP   23h
ms-login-http   ClusterIP   10.233.3.180   <none>        80/TCP    21h
$ kubectl describe ing
Name:             ms-login-http
Labels:           <none>
Namespace:        default
Address:          10.30.2.81
Default backend:  ms-login-http:80 (10.233.103.1:8080)
Rules:
  Host             Path     Backends
  ----             ----     --------
  api.mydomain.in
                   /api/sc  ms-login-http:80 (10.233.103.1:8080)
Annotations:      haproxy.org/load-balance: roundrobin
                  haproxy.org/src-ip-header: True-Client-IP
Events:           <none>
The issue is reachability of the deployed API:
[✓] Accessing the API from within any of the cluster nodes works fine
$ curl api.mydomain.in/api/sc/healthcheck
success
[✕] Same API from outside the cluster nodes fails
$ curl api.mydomain.in/api/sc/healthcheck
curl: (7) Failed to connect to api.mydomain.in port 80 after 0 ms: Connection refused
It seems to be a firewall issue, but I'm unable to narrow down what may be blocking the traffic. The iptables rules on the master nodes include several Calico forward rules. The rules list is shared in this gist.
Any direction/insight would greatly help, as I'm missing something basic here. I did not face this issue when I created a similar cluster some months back. It seems the latest version of Calico has something to do with it.

kubectl patch returning service not found

I have deployed pihole on my k3s cluster using this helm chart https://github.com/MoJo2600/pihole-kubernetes.
(I used this tutorial)
I now have my services, but they don't have external IPs:
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
pihole-web       ClusterIP   10.43.58.197    <none>        80/TCP,443/TCP   11h
pihole-dns-udp   NodePort    10.43.248.252   <none>        53:30451/UDP     11h
pihole-dns-tcp   NodePort    10.43.248.144   <none>        53:32260/TCP     11h
pihole-dhcp      NodePort    10.43.96.49     <none>        67:30979/UDP     11h
I have tried to assign the IPs manually with this command:
kubectl patch svc pihole-dns-tcp -p '{"spec":{"externalIPs":["192.168.178.210"]}}'
But when executing the command, I'm getting this error:
Error from server (NotFound): services "pihole-dns-tcp" not found
Any ideas for a fix?
Thank you in advance :)
Looks like "pihole-dns-tcp" is in a different namespace from the one where the patch command is being run.
As per the article you have shared, it seems the service pihole-dns-tcp is in the pihole namespace. So the command should be:
kubectl patch svc pihole-dns-tcp -n pihole -p '{"spec":{"externalIPs":["192.168.178.210"]}}'
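If you're not sure which namespace a service lives in, a quick check (not part of the original answer) is to list services across all namespaces:

$ kubectl get svc --all-namespaces | grep pihole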

How to make an already exposed service no longer exposed?

I deployed a (LoadBalancer) service for a pod on my minikube cluster, then exposed it via the minikube service [my_service] command. Now I'm trying to "turn off the exposure", but I can't find any way to do this other than deleting the service, which I'd like to avoid. Is it possible to just turn off the exposure without deleting (and redeploying) the existing, already exposed service?
Background
In the Kubernetes documentation on ServiceTypes, you can find information that if you want to expose your service outside the cluster, you have to use NodePort or LoadBalancer.
If you want to keep your service/application internal to the cluster, you should use ClusterIP:
Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
Depending on your version, you can edit it directly or use a workaround. For example, in K8s 1.16 you won't be able to, as some errors occur:
$ kubectl get svc
NAME         TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)        AGE
kubernetes   ClusterIP      10.8.0.1      <none>          443/TCP        3d20h
my-nginx     LoadBalancer   10.8.14.224   34.121.77.108   80:32039/TCP   3m16s
$ kubectl patch service my-nginx -p '{"spec":{"type":"ClusterIP"}}'
The Service "my-nginx" is invalid: spec.ports[0].nodePort: Forbidden: may not be used when `type` is 'ClusterIP'
Solutions
However, as you are using Minikube, I guess you are on a newer version (1.20), so you can change it using one of the options below:
kubectl edit
kubectl edit svc <yourSvcName> and change type to ClusterIP.
It will open a Vi editor, where you can change spec.type to ClusterIP or simply remove it; the default type for a Kubernetes Service is ClusterIP, so if it is not specified, Kubernetes will use ClusterIP automatically.
$ kubectl get svc
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP        25s
my-nginx     LoadBalancer   10.107.129.201   <pending>     80:30173/TCP   8s
minikube-new:~$ kubectl edit svc my-nginx
service/my-nginx edited
sekreta@minikube-new:~$ kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   46s
my-nginx     ClusterIP   10.107.129.201   <none>        80/TCP    29s
kubectl patch
$ kubectl patch service <yourServiceName> -p '{"spec":{"type":"ClusterIP"}}'
$ kubectl get svc
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP        10m
my-nginx     LoadBalancer   10.107.129.201   <pending>     80:30456/TCP   2m
minikube-new:~$ kubectl patch service my-nginx -p '{"spec":{"type":"ClusterIP"}}'
service/my-nginx patched
minikube-new:~$ kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   10m
my-nginx     ClusterIP   10.107.129.201   <none>        80/TCP    3m
kubectl apply
You can edit your YAML and remove spec.type, or keep 2 YAMLs (one with ClusterIP, one with LoadBalancer) and switch between them depending on your needs.
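For reference, a minimal svc.yaml for the my-nginx example with spec.type removed might look like this (the selector label is an assumption; match it to your Deployment's pod labels):

apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  # no "type:" here, so it defaults to ClusterIP
  selector:
    app: my-nginx
  ports:
  - port: 80
    targetPort: 80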
$ kubectl apply -f svc.yaml
service/my-nginx configured
minikube-new:~$ kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   22m
my-nginx     ClusterIP   10.111.237.133   <none>        80/TCP    16s
Alternatively, use third-party software such as Helm to make changes in your cluster, using templates.
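For example, a chart could template the service type so you can switch it per release; a hypothetical templates/service.yaml snippet (names and ports are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-svc
spec:
  type: {{ .Values.service.type | default "ClusterIP" }}
  selector:
    app: {{ .Release.Name }}
  ports:
  - port: 80
    targetPort: 80

You could then toggle the exposure with helm upgrade <release> <chart> --set service.type=LoadBalancer and turn it back off with --set service.type=ClusterIP.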

How to assign an IP to istio-ingressgateway on localhost?

I am using kubespray to run a Kubernetes cluster on my laptop. The cluster runs on 7 VMs, and the roles of the VMs are spread as follows:
NAME    STATUS   ROLES    AGE     VERSION
k8s-1   Ready    master   2d22h   v1.16.2
k8s-2   Ready    master   2d22h   v1.16.2
k8s-3   Ready    master   2d22h   v1.16.2
k8s-4   Ready    master   2d22h   v1.16.2
k8s-5   Ready    <none>   2d22h   v1.16.2
k8s-6   Ready    <none>   2d22h   v1.16.2
k8s-7   Ready    <none>   2d22h   v1.16.2
I've installed https://istio.io/ to build a microservices environment.
I have 2 services running that I'd like to access from outside:
k get services
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
greeter-service   ClusterIP   10.233.50.109   <none>        3000/TCP   47h
helloweb          ClusterIP   10.233.8.207    <none>        3000/TCP   47h
and the running pods:
NAMESPACE   NAME                                 READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
default     greeter-service-v1-8d97f9bcd-2hf4x   2/2     Running   0          47h   10.233.69.7   k8s-6   <none>           <none>
default     greeter-service-v1-8d97f9bcd-gnsvp   2/2     Running   0          47h   10.233.65.3   k8s-2   <none>           <none>
default     greeter-service-v1-8d97f9bcd-lkt6p   2/2     Running   0          47h   10.233.68.9   k8s-7   <none>           <none>
default     helloweb-77c9476f6d-7f76v            2/2     Running   0          47h   10.233.64.3   k8s-1   <none>           <none>
default     helloweb-77c9476f6d-pj494            2/2     Running   0          47h   10.233.69.8   k8s-6   <none>           <none>
default     helloweb-77c9476f6d-tnqfb            2/2     Running   0          47h   10.233.70.7   k8s-5   <none>           <none>
The problem is, I cannot access the services from outside, because I do not have an EXTERNAL-IP address (remember, the cluster is running on my laptop).
k get svc istio-ingressgateway -n istio-system
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                                                                      AGE
istio-ingressgateway   LoadBalancer   10.233.61.112   <pending>     15020:31311/TCP,80:30383/TCP,443:31494/TCP,15029:31383/TCP,15030:30784/TCP,15031:30322/TCP,15032:30823/TCP,15443:30401/TCP   47h
As you can see, the value in the EXTERNAL-IP column is <pending>.
The question is, how to assign an EXTERNAL-IP to the istio-ingressgateway.
First of all, you can't make k8s assign you an external IP address, as the LoadBalancer service type is cloud-provider specific. You could map your router's external IP address to it, I guess, but it is not trivial.
To reach the service, you can do this:
kubectl edit svc istio-ingressgateway -n istio-system
Change the type of the service from LoadBalancer to ClusterIP. You could also use NodePort. Actually, you can skip this step, as a LoadBalancer service already includes a NodePort and a ClusterIP; it is just to get rid of that pending status.
kubectl port-forward svc/istio-ingressgateway YOUR_LAPTOP_PORT:INGRESS_CLUSTER_IP_PORT -n istio-system
I don't know which port you want to access from your localhost. Say it's 80; then you can do:
kubectl port-forward svc/istio-ingressgateway 8080:80 -n istio-system
Now port 8080 of your laptop (localhost:8080) will be mapped to port 80 of the istio-ingressgateway service.
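From another terminal on your laptop you can then verify the mapping (assuming plain HTTP is served behind the gateway on that port):

$ curl http://localhost:8080/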
By default, there is no way Kubernetes can assign an external IP to a LoadBalancer service.
This service type needs infrastructure support, which cloud offerings like GKE, AKS, EKS etc. provide.
As you are running this cluster on your laptop, deploy the MetalLB load balancer to get an EXTERNAL-IP.
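Once MetalLB is installed, its older ConfigMap-based releases are configured with an address pool on your network; a minimal Layer 2 sketch (the address range is an assumption, pick a free range on your LAN):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250

With a pool configured, MetalLB assigns one of those addresses to the istio-ingressgateway LoadBalancer service, and its EXTERNAL-IP leaves <pending>.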
It's not possible as Suresh explained.
But if you want to access it from your laptop, you can set your service type to NodePort, which gives you access from outside the cluster.
You should first obtain the IP of your cluster node(s), then create your service with something like this:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  type: NodePort
  ports:
  - name: http
    protocol: TCP
    port: 3000
    targetPort: 3000
    nodePort: 30000
After that, you can access it from your laptop at http://<node-ip>:30000 (the IP of any cluster node).
There is no need to create an ingress for that.
You should use a port in the range 30000-32767, as stated below:
If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag (default: 30000-32767).
If you are using minikube, just run:
$ minikube tunnel
$ k get svc -n istio-system
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                      AGE
istio-ingressgateway   LoadBalancer   10.111.187.167   127.0.0.1     15021:31949/TCP,80:32215/TCP,443:30585/TCP   9m48s

Kubernetes 1.13, CoreDNS - cluster curl service?

By default, CoreDNS is installed in Kubernetes 1.13.
Can you please tell me how to curl a service inside the cluster by its name?
[root@master ~]# kubectl get services
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.233.0.1   <none>        443/TCP   24h
[root@master ~]# kubectl get services --all-namespaces
NAMESPACE     NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                       AGE
kube-system   coredns     ClusterIP   10.233.0.3      <none>        53/UDP,53/TCP,9153/TCP                                        21h
tools         nexus-svc   NodePort    10.233.17.152   <none>        8081:31991/TCP,5000:31111/TCP,8083:31081/TCP,8082:31085/TCP   14h
[root@master ~]# kubectl describe services nexus-svc --namespace=tools
Name:                     nexus-svc
Namespace:                tools
Labels:                   tools=nexus
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"tools":"nexus"},"name":"nexus-svc","namespace":"tools"},"spec"...
Selector:                 tools=nexus
Type:                     NodePort
IP:                       10.233.17.152
Port:                     http  8081/TCP
.....
So, curling by the cluster IP, I get the correct answer:
[root@master ~]# curl http://10.233.17.152:8081
<!DOCTYPE html>
<html lang="en">
<head>
<title>Nexus Repository Manager</title>
....
But curling by the service DNS name, I do not:
[root@master ~]# curl http://nexus-svc.tools.svc.cluster.local
curl: (6) Could not resolve host: nexus-svc.tools.svc.cluster.local; Unknown error
[root@master ~]# curl http://nexus-svc.tools.svc.cluster.local:8081
curl: (6) Could not resolve host: nexus-svc.tools.svc.cluster.local; Unknown error
Thanks.
CoreDNS or kube-dns is meant to resolve a service name to its ClusterIP (normal service) or to the corresponding Pod IPs (headless service) inside the Kubernetes cluster, not outside it. You are trying to curl the service name on the node, not inside a pod, and hence it cannot resolve the service name to its ClusterIP.
You can go inside the pod and try the following:
kubectl exec -it <pod_name> bash
nslookup nexus-svc.tools.svc.cluster.local
It will return the cluster IP, which means CoreDNS is working fine. If your pod has the curl utility, then you can also curl the service using its name (but only from inside the cluster).
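For example, from inside a pod that has curl, the service from the question should be reachable by its FQDN:

curl http://nexus-svc.tools.svc.cluster.local:8081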
If you want to access the service from outside the cluster, this service is already exposed as a NodePort, so you can access it using:
curl http://<node_ip>:31991
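To find a <node_ip> to substitute there, list the nodes with their addresses:

$ kubectl get nodes -o wide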
Hope this helps.