How to assign an IP to istio-ingressgateway on localhost? - kubernetes

I am using kubespray to run a Kubernetes cluster on my laptop. The cluster runs on 7 VMs, with the roles spread as follows:
NAME STATUS ROLES AGE VERSION
k8s-1 Ready master 2d22h v1.16.2
k8s-2 Ready master 2d22h v1.16.2
k8s-3 Ready master 2d22h v1.16.2
k8s-4 Ready master 2d22h v1.16.2
k8s-5 Ready <none> 2d22h v1.16.2
k8s-6 Ready <none> 2d22h v1.16.2
k8s-7 Ready <none> 2d22h v1.16.2
I've installed https://istio.io/ to build a microservices environment.
I have 2 services running that I would like to access from outside:
k get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
greeter-service ClusterIP 10.233.50.109 <none> 3000/TCP 47h
helloweb ClusterIP 10.233.8.207 <none> 3000/TCP 47h
and the running pods:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default greeter-service-v1-8d97f9bcd-2hf4x 2/2 Running 0 47h 10.233.69.7 k8s-6 <none> <none>
default greeter-service-v1-8d97f9bcd-gnsvp 2/2 Running 0 47h 10.233.65.3 k8s-2 <none> <none>
default greeter-service-v1-8d97f9bcd-lkt6p 2/2 Running 0 47h 10.233.68.9 k8s-7 <none> <none>
default helloweb-77c9476f6d-7f76v 2/2 Running 0 47h 10.233.64.3 k8s-1 <none> <none>
default helloweb-77c9476f6d-pj494 2/2 Running 0 47h 10.233.69.8 k8s-6 <none> <none>
default helloweb-77c9476f6d-tnqfb 2/2 Running 0 47h 10.233.70.7 k8s-5 <none> <none>
The problem is, I cannot access the services from outside, because I do not have an EXTERNAL-IP address (remember, the cluster is running on my laptop).
k get svc istio-ingressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.233.61.112 <pending> 15020:31311/TCP,80:30383/TCP,443:31494/TCP,15029:31383/TCP,15030:30784/TCP,15031:30322/TCP,15032:30823/TCP,15443:30401/TCP 47h
As you can see, the value in the EXTERNAL-IP column is <pending>.
The question is: how do I assign an EXTERNAL-IP to the istio-ingressgateway?

First of all, you can't make Kubernetes assign you an external IP address, as the LoadBalancer service type is cloud-provider specific. You could map your router's external IP address to it, I guess, but that is not trivial.
To reach the service, you can do this:
kubectl edit svc istio-ingressgateway -n istio-system
Change the type of the service from LoadBalancer to ClusterIP. You can also use NodePort. Actually, you can skip this step, as a LoadBalancer service already includes a NodePort and a ClusterIP; this is just to get rid of that pending status.
kubectl port-forward svc/istio-ingressgateway YOUR_LAPTOP_PORT:INGRESS_CLUSTER_IP_PORT -n istio-system
I don't know which port you want to reach from your localhost. Say it is 80; you can do:
kubectl port-forward svc/istio-ingressgateway 8080:80 -n istio-system
Now port 8080 of your laptop (localhost:8080) will be mapped to port 80 of the istio-ingressgateway service.
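You can then verify the tunnel with curl from your laptop. A minimal sketch; helloweb.local is a hypothetical host name and must match whatever host your Istio Gateway and VirtualService actually route:
# "helloweb.local" is a placeholder; use the host configured in your Gateway/VirtualService
curl -H "Host: helloweb.local" http://localhost:8080/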

By default, there is no way Kubernetes can assign an external IP to a LoadBalancer service.
This service type needs infrastructure support, which exists in cloud offerings like GKE, AKS, EKS, etc.
As you are running this cluster on your laptop, deploy the MetalLB load balancer to get an EXTERNAL-IP.
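For example, a minimal MetalLB layer 2 configuration might look like this. This is a sketch using the legacy ConfigMap format that matches MetalLB releases of this era (newer releases use an IPAddressPool custom resource instead), and the address range is an assumption; it must be a free range on your laptop's network:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250  # assumed free range on the local network
Once MetalLB is running with such a pool, the istio-ingressgateway service should receive one of these addresses as its EXTERNAL-IP.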

It's not possible, as Suresh explained.
But if you want to access it from your laptop, you can set your service type to NodePort, which gives you access from outside the cluster.
You should first obtain the IP of one of your cluster nodes, then create your service with something like this:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  type: NodePort
  ports:
  - name: http
    protocol: TCP
    port: 3000
    targetPort: 3000
    nodePort: 30000
After that, you can access it from your laptop at http://<node-ip>:30000 (see below for how to find a node IP).
There is no need to create an ingress for that.
You should use a port in the range 30000-32767, as stated in the documentation:
If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag (default: 30000-32767).
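To find a node IP for that URL, you can query the nodes directly; any node's INTERNAL-IP works, since the NodePort is opened on every node. A short sketch:
kubectl get nodes -o wide   # the INTERNAL-IP column lists each node's address
curl http://<node-ip>:30000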

If you are using minikube, just run:
$ minikube tunnel
$ k get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.111.187.167 127.0.0.1 15021:31949/TCP,80:32215/TCP,443:30585/TCP 9m48s

Related

Unable to reach service/API from outside the cluster - Kubernetes (Metallb+HAProxy Ingress Controller)

I've created a bare-metal multi-master k8s cluster using kubekey.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master1 Ready control-plane,master 23h v1.23.10
master2 Ready control-plane,master 23h v1.23.10
master3 Ready control-plane,master 23h v1.23.10
worker1 Ready worker 23h v1.23.10
worker2 Ready worker 23h v1.23.10
worker3 Ready worker 23h v1.23.10
$ curl localhost:10249/healthz
ok
Added the MetalLB load balancer and the HAProxy Ingress Controller. The haproxy-controller gets the external IP address from MetalLB correctly:
$ kubectl get svc -n haproxy-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
haproxy-kubernetes-ingress LoadBalancer 10.233.59.120 10.30.2.81 80:32244/TCP,443:30908/TCP,1024:32666/TCP 21h
Deployed a microservice, and exposed the service via ingress:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.233.0.1 <none> 443/TCP 23h
ms-login-http ClusterIP 10.233.3.180 <none> 80/TCP 21h
$ kubectl describe ing
Name: ms-login-http
Labels: <none>
Namespace: default
Address: 10.30.2.81
Default backend: ms-login-http:80 (10.233.103.1:8080)
Rules:
Host Path Backends
---- ---- --------
api.mydomain.in
/api/sc ms-login-http:80 (10.233.103.1:8080)
Annotations: haproxy.org/load-balance: roundrobin
haproxy.org/src-ip-header: True-Client-IP
Events: <none>
The issue is reachability of the deployed API:
[✓] Accessing the API from within any of the cluster nodes works fine
$ curl api.mydomain.in/api/sc/healthcheck
success
[✕] Same API from outside the cluster nodes fails
$ curl api.mydomain.in/api/sc/healthcheck
curl: (7) Failed to connect to api.mydomain.in port 80 after 0 ms: Connection refused
This seems to be a firewall issue, but I am unable to narrow down what may be blocking the traffic. The iptables rules on the master nodes include several Calico forward rules; the rules list is shared in this gist.
Any direction/insight would greatly help, as I'm missing something basic here. I did not face this issue when I created a similar cluster some months back, so it seems the latest version of Calico has something to do with it.

I get multiple IPs with my ingress resource and don't know why or how to fix it?

Hello guys, I have an ingress controller running, and I deployed an ingress for Kafka (deployed through Strimzi), but the ingress is showing me multiple IPs for the address instead of one. I'd like to know why, and what I can do to fix it, because from what I've seen in tutorials, when you have an ingress, the IP given in the address is the same as the one in the ingress controller service (in my case it should be 172.24.20.195). Here are the ingress controller components:
root@di-admin-general:/home/lda# kubectl get all -n ingress-nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/default-http-backend-598b7d7dbd-ggghv 1/1 Running 2 6d3h 192.168.129.71 pr-k8s-fe-fastdata-worker-02 <none> <none>
pod/nginx-ingress-controller-4rdxd 1/1 Running 2 6d3h 172.24.20.8 pr-k8s-fe-fastdata-worker-02 <none> <none>
pod/nginx-ingress-controller-g6d2f 1/1 Running 2 6d3h 172.24.20.242 pr-k8s-fe-fastdata-worker-01 <none> <none>
pod/nginx-ingress-controller-r995l 1/1 Running 2 6d3h 172.24.20.38 pr-k8s-fe-fastdata-worker-03 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/default-http-backend ClusterIP 192.168.42.107 <none> 80/TCP 6d3h app=default-http-backend
service/nginx-ingress-controller LoadBalancer 192.168.113.157 172.24.20.195 80:32641/TCP,443:32434/TCP 163m workloadID_nginx-ingress-controller=true
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
daemonset.apps/nginx-ingress-controller 3 3 3 3 3 <none> 6d3h nginx-ingress-controller rancher/nginx-ingress-controller:nginx-0.35.0-rancher2 app=ingress-nginx
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/default-http-backend 1/1 1 1 6d3h default-http-backend rancher/nginx-ingress-controller-defaultbackend:1.5-rancher1 app=default-http-backend
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/default-http-backend-598b7d7dbd 1 1 1 6d3h default-http-backend rancher/nginx-ingress-controller-defaultbackend:1.5-rancher1 app=default-http-backend,pod-template-hash=598b7d7dbd
root@di-admin-general:/home/lda#
and here are the kafka part:
root@di-admin-general:/home/lda# kubectl get ingress -n kafkanew -o wide
NAME CLASS HOSTS ADDRESS PORTS AGE
kafka-ludo-kafka-0 <none> broker-0.172.24.20.195.nip.io 172.24.20.242,172.24.20.38,172.24.20.8 80, 443 5d22h
kafka-ludo-kafka-1 <none> broker-1.172.24.20.195.nip.io 172.24.20.242,172.24.20.38,172.24.20.8 80, 443 5d22h
kafka-ludo-kafka-2 <none> broker-2.172.24.20.195.nip.io 172.24.20.242,172.24.20.38,172.24.20.8 80, 443 5d22h
kafka-ludo-kafka-bootstrap <none> bootstrap.172.24.20.195.nip.io 172.24.20.242,172.24.20.38,172.24.20.8 80, 443 5d22h
You can see that we have 3 IPs (172.24.20.242, 172.24.20.38, 172.24.20.8) instead of just the one I expected, 172.24.20.195. Please, can anyone provide an explanation? The link to the Strimzi YAML used to expose an ingress is here: https://developers.redhat.com/blog/2019/06/12/accessing-apache-kafka-in-strimzi-part-5-ingress/
Thank you for your help.
Your external IP is the one you see in kubectl get svc; in your case, 172.24.20.195.
The other IPs you see in kubectl get ingress are the ingress controller pod IPs, which are internal to your cluster.
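If you would rather have the ingress Address show the controller service's external IP (172.24.20.195) instead of the pod IPs, the nginx ingress controller supports a --publish-service flag pointing at its own service. A sketch of the container args, assuming the controller's DaemonSet lives in the ingress-nginx namespace as in your output:
args:
- /nginx-ingress-controller
- --publish-service=ingress-nginx/nginx-ingress-controller  # namespace/name of the controller's service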

How to get FQDN DNS name of a kubernetes service?

How do I get the full FQDN of a service inside Kubernetes?
➜ k get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
airflow-flower-service ClusterIP 172.20.119.107 <none> 5555/TCP 20d app=edna-airflow
airflow-service ClusterIP 172.20.76.63 <none> 80/TCP 20d app=edna-airflow
backend-service ClusterIP 172.20.39.154 <none> 80/TCP 20d app=edna-backend
So how do I query the internal Kubernetes DNS to get the FQDN of, for example, the backend-service?
Go inside any pod in the same namespace with kubectl exec -ti <your pod> bash and then run nslookup <your service>. Unless you have changed the cluster's DNS configuration, this will typically resolve to: yourservice.yournamespace.svc.cluster.local
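For example (the pod name is hypothetical, and the exact output shape depends on your DNS provider and resolver):
kubectl exec -ti some-pod -- nslookup backend-service
Server:    172.20.0.10
Address:   172.20.0.10#53

Name:   backend-service.default.svc.cluster.local
Address: 172.20.39.154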

Kubernetes service created via exposed deployment is not responding to curl

I deployed my application using a Deployment. My pod's state is Running, and making curl against the pod's IP returns the application content. However, when I create a service using kubectl expose deployment and curl the service's IP, curl throws a Connection refused error. Why is that?
My pod
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cge-frontend-5d4595469b-qvcsd 0/1 Running 0 19s 10.40.0.4 compute04 <none> <none>
My service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cge-frontend ClusterIP 10.98.212.184 <none> 80/TCP 16m
Error
$ curl 10.98.212.184
curl: (7) Failed connect to 10.98.212.184:80; Connection refused
After investigating my service with the kubectl describe svc command, I figured out that my service has no Endpoints - the Endpoints section should list the pod's IP.
$ kubectl describe svc cge-frontend
Name: cge-frontend
Namespace: default
Labels: app=cge-frontend
Annotations: <none>
Selector: app=cge-frontend
Type: ClusterIP
IP: 10.98.212.184
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints:
Session Affinity: None
It turned out that the error was caused by one of my probes, which was keeping my pod in the Running state but never in the Ready state. Fixing the probes fixed my pods, and that fixed the service (a sample readiness probe is sketched after the output below).
My pod after fixing the probes is now in the correct state, READY 1/1:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cge-frontend-5d4595469b-qvcsd 1/1 Running 0 19s 10.40.0.5 compute04 <none> <none>
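For reference, a readiness probe of the kind that must pass before the pod shows up in the service's Endpoints might look like this (a sketch: the /healthz path is an assumed health endpoint on the container's port 80):
readinessProbe:
  httpGet:
    path: /healthz   # assumed health endpoint
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10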

ingress-nginx No IP Address

I've created a test k8s cluster using kubespray (3 nodes, VirtualBox CentOS VM based) and have been trying to follow the guide for setting up the nginx ingress, but I never seem to get an external address assigned to my service.
I can see that the ingress controller is apparently installed:
[root@k8s-01 ~]# kubectl get pods --all-namespaces -l app=ingress-nginx
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx nginx-ingress-controller-58c9df5856-v6hml 1/1 Running 0 28m
And following the prerequisites docs, I have set up the http-svc sample service:
[root@k8s-01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
http-svc-794dc89f5-f2vlx 1/1 Running 0 27m
[root@k8s-01 ~]# kubectl get svc http-svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
http-svc LoadBalancer 10.233.25.131 <pending> 80:30055/TCP 27m
[root@k8s-01 ~]# kubectl describe svc http-svc
Name: http-svc
Namespace: default
Labels: app=http-svc
Annotations: <none>
Selector: app=http-svc
Type: LoadBalancer
IP: 10.233.25.131
Port: http 80/TCP
TargetPort: 8080/TCP
NodePort: http 30055/TCP
Endpoints: 10.233.65.5:8080
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Type 27m service-controller ClusterIP -> LoadBalancer
As far as I know, I should see a LoadBalancer Ingress entry, but the External IP for the service still appears to be pending, so something isn't working, and I'm at a loss where to start diagnosing what has gone wrong.
Since you are creating your cluster locally, exposing your service as type LoadBalancer will not provision a load balancer for you. Use the LoadBalancer type if you are creating your cluster in a cloud environment such as AWS or GKE; in AWS it will auto-provision a load balancer (ELB) and assign an external IP to the service.
To make your service work with the current settings and environment, change your service type from LoadBalancer to NodePort, for example with the patch below.
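A one-liner sketch; http-svc is the service name from your output:
kubectl patch svc http-svc -p '{"spec": {"type": "NodePort"}}'
After that, the service is reachable on any node's IP at the NodePort already allocated for it (30055 in your output).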