browsing Istio/k8s services from internet - kubernetes

I have started reading Istio in Action (Manning) and Mastering Service Mesh (Packt), and there are some examples where I cannot 'see' the right output.
I work on my laptop with Ubuntu 20.04 and use [kind] for my local k8s cluster, where I can create 3 or more worker nodes.
When I deploy some Istio resources (e.g. a virtual service), I would like to browse the service mesh from my Ubuntu browser or from a different client (another laptop or a cell phone), but something is missing in my 'infrastructure': is it an external load balancer, or some local Ubuntu configuration? Is it mandatory to work with a public cloud provider (GCP/AWS/Azure)? If yes, which one is the simplest? I have tried kubectl port-forward, but without success.
Other resources work fine (e.g. istioctl dashboard kiali/jaeger/prometheus) even without an ExternalIP.
Could you help me find a blog, tutorial, hint, or advice about the elements necessary for browsing k8s/Istio services from the internet?
Thank you in advance!

When installing Istio with the istio-ingressgateway enabled, a service with that name is created in the istio-system namespace.
❯ kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
istio-ingressgateway LoadBalancer 100.71.98.21 <pending> 80:32564/TCP,...
When deploying Istio to a public cloud provider, the provider will create a load balancer (like an AWS ELB) for you. When the setup is done, the EXTERNAL-IP will switch from <pending> to an actual IP: the public IP of the load balancer. You can access your cluster by visiting that IP.
On your local setup you don't have this luxury, but the service is still created. In the PORT(S) column you can see a bunch of ports; that is actually a port mapping, so ports on your node machines are being mapped to that service.
Use this to find the node port mapped to http (port 80): in the output above it is 32564. Or you can run this:
kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'
Now just open your browser and use one of your worker nodes' IPs to access the cluster by visiting <NODE_IP>:<PORT> (where PORT is the one from above).
See docs
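Since the question uses kind specifically: kind nodes are Docker containers, so <NODE_IP> above is only reachable from the laptop itself, not from another laptop or a phone. One way around this is to recreate the cluster with extraPortMappings and pin the gateway's http2 nodePort to the mapped port. A minimal sketch, where 30080 is an arbitrary choice of NodePort:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30080   # NodePort we will assign to the gateway's http2 port
    hostPort: 80           # exposed on the laptop, reachable from the LAN
    protocol: TCP

After kind create cluster --config kind-config.yaml, set the istio-ingressgateway http2 nodePort to 30080 (e.g. with kubectl -n istio-system edit svc istio-ingressgateway); the mesh is then reachable from any device on the same network via the laptop's LAN IP on port 80.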

Related

expose Istio-gateway on port 80

I'm running a bare-metal Kubernetes cluster with 1 master node and 3 worker nodes. I have a bunch of services deployed inside, with Istio as an ingress gateway.
Everything works fine, since I can access my services from outside using the ingress gateway NodePort.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.106.9.2 <pending> 15021:32402/TCP,80:31106/TCP,443:31791/TCP 2d23h
istiod ClusterIP 10.107.220.130 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 2d23h
In our case that is port 31106.
The issue is, I don't want my customers to access my service on port 31106; that's not user-friendly. So is there a way to expose port 80 to the outside?
In other words, instead of typing http://example.com:31106/, I want them to be able to type http://example.com/
Any solution would help.
Based on official documentation:
If the EXTERNAL-IP value is set, your environment has an external load balancer that you can use for the ingress gateway. If the EXTERNAL-IP value is <none> (or perpetually <pending>), your environment does not provide an external load balancer for the ingress gateway. In this case, you can access the gateway using the service’s node port.
This is in line with what David Maze wrote in the comment:
A LoadBalancer-type service would create that load balancer, but only if Kubernetes knows how; maybe look up metallb for an implementation of that. The NodePort port number will be stable unless the service gets deleted and recreated, which in this case would mean wholesale uninstalling and reinstalling Istio.
In your situation you need to access the gateway using the NodePort. Then you can configure Istio. Everything is described step by step in this doc. You need to choose the instructions corresponding to NodePort and then set the ingress IP depending on the cluster provider. You can also find sample yaml files in the documentation.
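For bare metal, the MetalLB route from the comment above is the usual way to get a real EXTERNAL-IP so that plain port 80 works. A minimal sketch of a layer-2 address pool, assuming MetalLB is installed in the metallb-system namespace and the 192.168.1.240-250 range is unused on your network (this is the legacy ConfigMap format; MetalLB 0.13+ uses CRDs such as IPAddressPool instead):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # assumption: free addresses on your LAN

Once MetalLB hands the istio-ingressgateway service an address, EXTERNAL-IP leaves <pending> and http://example.com/ works on the default port 80, provided example.com resolves to that address.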

Burrow Dashboard UI not showing up

I have modified Burrow charts available at https://github.com/Yolean/kubernetes-kafka/tree/master/linkedin-burrow
Things are working fine.
I have port-forwarded my burrow deployment to localhost:8000
When I hit the API endpoints, I am receiving the correct output.
However, the Burrow dashboard UI is not coming up.
How do I get the UI?
Attaching a screenshot and the Kubernetes deployment details for reference.
Create a service object that exposes your deployment:
$ kubectl expose deployment your-deployment --type=LoadBalancer --name=your-service
Check some information about the Service:
$ kubectl get services your-service
The output should be similar to this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
your-service LoadBalancer x.y.a.b c.d.e.f 8080/TCP 10s
If the external IP address is shown as <pending>, wait a while and execute the same command again.
To get to the Burrow UI you need to take the external IP and add an entry for it to your hosts file (on Linux this is /etc/hosts):
vi /etc/hosts
your_burrow_external_ip www.preferred-name-of-site.com
E.g.:
vi /etc/hosts
10.107.12.12 www.example.com
Then use the external IP address (LoadBalancer Ingress) to access your application:
http://<external-ip>:<port>
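For example, assuming the service kept Burrow's default HTTP port 8000 (the port the question forwarded to) and the hosts entry above points www.example.com at the external IP:

$ curl http://www.example.com:8000/v3/kafka

/v3/kafka is the endpoint of Burrow's v3 HTTP API that lists the configured Kafka clusters; if it responds, the service is reachable from outside the cluster.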
You can find more information here: exposing-application.
I hope it helps.

How to expose service outside k8s cluster?

I have run a Hello World application using the command below.
kubectl run hello-world --replicas=2 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0 --port=8080
Created a service as below
kubectl expose deployment hello-world --type=NodePort --name=example-service
The pods are running
NAME READY STATUS RESTARTS AGE
hello-world-68ff65cf7-dn22t 1/1 Running 0 2m20s
hello-world-68ff65cf7-llvjt 1/1 Running 0 2m20s
Service:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
example-service NodePort 10.XX.XX.XX <none> 8080:32023/TCP 66s
Here, I am able to test it through curl inside the cluster.
curl http://10.XX.XX.XX:8080
Hello Kubernetes!
How can I access this service outside my cluster? (example, through laptop browser)
You should try:
http://IP_OF_KUBERNETES:32023
IP_OF_KUBERNETES can be your master IP or your worker IP.
When you expose a NodePort in Kubernetes, it exposes that port on every server in the cluster. Imagine you have two worker nodes with IP1 and IP2,
and one pod is running on IP1 while worker2 has no pods; you can still access your pod via:
http://IP1:32023
http://IP2:32023
You should be able to access it outside the cluster using the assigned NodePort (32023). Paste http://<IP>:<Port> into your browser and you will be able to access your app:
http://<MASTER/WORKER_IP>:32023
There are answers already provided, but I felt this topic needed some consolidation.
This is fairly easy: NodePort, as the name says, exposes your application on a port of each node, so any node's IP address will work (the port is open on all of them). To find the node a pod is actually on, run:
kubectl get pods -o wide, which shows the IP or name of the node hosting the pod; then just follow what the previous answers state: http://<MASTER/WORKER_IP>:PORT
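For example (the pod, node name, and IPs below are illustrative):

$ kubectl get pods -o wide
NAME                          READY   STATUS    ...   IP           NODE
hello-world-68ff65cf7-dn22t   1/1     Running   ...   10.244.1.5   worker-1
$ kubectl get nodes -o wide
# take worker-1's INTERNAL-IP (or EXTERNAL-IP, if set) and browse http://<that-ip>:32023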
There are more methods:
You can deploy an Ingress Controller and configure an Ingress so the application will be reachable from the internet (see the sketch after this list).
You can also use kubectl proxy to expose a ClusterIP service outside of the cluster, like in this example with the Dashboard.
Another way is to use the LoadBalancer type, which requires underlying cloud infrastructure.
If you are using minikube, you can try running minikube service list to check your exposed services and their IPs.
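As a sketch of the Ingress method above, assuming an ingress controller (e.g. ingress-nginx) is installed and hello.example.com resolves to the cluster:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
spec:
  rules:
  - host: hello.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service   # the NodePort service created in the question
            port:
              number: 8080

With this in place, clients reach the app at http://hello.example.com/ with no node port in the URL.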
You can access your service using the master IP or a worker IP. If you are planning to use it in production or in a more reliable way, you should create a service of type LoadBalancer and use the load balancer's IP to access it.
If you are using any cloud environment, make sure the firewall rules allow incoming traffic.
The load balancer takes care of routing requests into the cluster regardless of which node the pod is running on. Otherwise you have to pick a master or worker IP manually, and if that node is removed from the cluster you will have to change the IP you are hitting.
Service Load Balancer

For pod-to-pod communication, what IP should be used? The service's ClusterIP or the endpoint

I've deployed the Rancher Helm chart to my Kubernetes cluster and want to access the Rancher API/UI from another pod (i.e. a pod running an ingress controller).
When I list the services and the endpoints, the IP addresses differ:
$ kubectl get ep | grep rancher
release-name-rancher 10.200.23.13:80 18h
and
$ kubectl get services | grep rancher
release-name-rancher ClusterIP 10.100.200.253 <none> 80/TCP 18h
Within the container of the client (i.e. the ingress controller), I see the service being represented by the service's ClusterIP:
$ env | grep RELEASE_NAME_RANCHER_SERVICE_HOST
RELEASE_NAME_RANCHER_SERVICE_HOST=10.100.200.253
Trying to reach the backend via the IP address in the env does not work (curl 10.100.200.253 delivers no response and blocks forever).
Trying to reach the backend via the endpoint address works:
$ curl 10.200.23.13
Found.
I'm quite confused about why the endpoint IP address and the ClusterIP address differ, and why it is not possible to connect to the ClusterIP address. Any hints to polish my understanding?
In Kubernetes, every Pod and Service gets its own IP address. The kubectl get services IP address is the Kubernetes-internal address of the Service; the kubectl get ep address is the address of the Pod behind it. The Service actually acts like a load balancer, and there can be multiple Pods attached to it. The Kubernetes Service documentation goes into a lot of detail about what exactly is happening here.
Kubernetes also provides an internal DNS service that can resolve Service names. You generally shouldn't use any of these IP addresses directly; instead, use the host name release-name-rancher.default.svc.cluster.local (or replace "default" if you're running in some other Kubernetes namespace).
While the ..._SERVICE_HOST environment variable you reference is supported and documented, I'd avoid using it. Of particular note, if you helm install or kubectl apply a large set of resources at once and the Pod gets created before the Service, you'll be in a consistent state except that the Pod won't actually have this environment variable. In Helm land, where Services don't have fixed names, the environment variable name won't be fixed either. Prefer the DNS name.
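For example, from a shell in another Pod in the cluster:

$ curl http://release-name-rancher.default.svc.cluster.local/
# from a Pod in the same namespace, the short name resolves too (via the DNS search path):
$ curl http://release-name-rancher/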

External IP assignment with Minikube ingress add-on enabled

For development purposes I am trying to use Minikube. I want to test how my application will catch the event of a service being exposed and an External-IP being assigned.
When I exposed a service in the Google Container Engine quick-start tutorial, I could see the External IP assignment event with:
kubectl get services --watch
I want to achieve the same with Minikube (if possible).
Here is how I try to set things up locally on my OSX development machine:
minikube start --vm-driver=xhyve
minikube addons enable ingress
kubectl run echoserver --image=gcr.io/google_containers/echoserver:1.4 --port=8080
kubectl expose deployment echoserver --type="LoadBalancer"
kubectl get services --watch
I see the following output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
echoserver LoadBalancer 10.0.0.138 <pending> 8080:31384/TCP 11s
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 4m
The EXTERNAL-IP field never gets updated and stays in the pending phase. Is it possible to achieve external IP assignment with Minikube?
On GKE or AWS installs, the external IP comes from the cloud integration, which reports back to the kube API the address assigned to the created LB.
To have the same on minikube you'd have to run some kind of LB controller, e.g. an haproxy-based one, but honestly, for minikube it makes little sense: you have a single IP that you know in advance via minikube ip, so you can use NodePort with that knowledge. An LB solution would require setting some IP range that can be mapped to particular NodePorts, as this is effectively what an LB does: take traffic from extIP:extPort and proxy it to minikubeIP:NodePort.
Unless your use case prevents it, you should consider Ingress as the way of ingesting traffic into your minikube.
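A minimal sketch of that NodePort approach with the echoserver service from the question (the IP is illustrative; 31384 is the node port shown in the output above):

$ minikube ip
192.168.99.100
$ kubectl get svc echoserver -o jsonpath='{.spec.ports[0].nodePort}'
31384
$ curl http://192.168.99.100:31384/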
If you want to emulate the external IP assignment event (like the one you can observe on GKE or AWS), this can be achieved by applying the following patch to your sandbox Kubernetes:
kubectl run minikube-lb-patch --replicas=1 --image=elsonrodriguez/minikube-lb-patch:0.1 --namespace=kube-system
https://github.com/elsonrodriguez/minikube-lb-patch#assigning-external-ips