How to expose a service outside the k8s cluster?

I have run a Hello World application using the below command.
kubectl run hello-world --replicas=2 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0 --port=8080
Created a service as below
kubectl expose deployment hello-world --type=NodePort --name=example-service
The pods are running
NAME READY STATUS RESTARTS AGE
hello-world-68ff65cf7-dn22t 1/1 Running 0 2m20s
hello-world-68ff65cf7-llvjt 1/1 Running 0 2m20s
Service:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
example-service NodePort 10.XX.XX.XX <none> 8080:32023/TCP 66s
Here, I am able to test it through curl inside the cluster.
curl http://10.XX.XX.XX:8080
Hello Kubernetes!
How can I access this service outside my cluster? (example, through laptop browser)

You should try:
http://IP_OF_KUBERNETES_NODE:32023
IP_OF_KUBERNETES_NODE can be your master IP or any worker IP.
When you expose a port via NodePort, Kubernetes opens that port on every server in the cluster. Imagine you have two worker nodes with IP1 and IP2, and a pod running only on the node with IP1; even though no pod runs on worker2, you can access your pod through either:
http://IP1:32023
http://IP2:32023
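If you are unsure of your node addresses, they can be listed with:
kubectl get nodes -o wide
The INTERNAL-IP / EXTERNAL-IP columns show the addresses you can combine with the NodePort.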

You should be able to access it outside the cluster using the assigned NodePort (32023). Paste the following http://<IP>:<Port> into your browser and you will be able to access your app:
http://<MASTER/WORKER_IP>:32023

There are answers already provided, but I felt like this topic needed some consolidation.
This is fairly easy. NodePort, as the name says, exposes your application on a port of each node. All you have to do is find the IP address of the node on which the pod runs. You can do that with:
kubectl get pods -o wide
which shows the IP or name of the node for each pod; then just follow what the previous answers state: http://<MASTER/WORKER_IP>:PORT
There are more methods:
You can deploy an Ingress Controller and configure an Ingress so the application is reachable from the internet (a minimal Ingress sketch follows this list).
You can also use kubectl proxy to expose a ClusterIP service outside of the cluster, like in this example with Dashboard.
Another way is the LoadBalancer type, which requires underlying cloud infrastructure.
If you are using minikube, you can run minikube service list to check your exposed services and their IPs.
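For illustration, a minimal Ingress manifest could look like the sketch below. The hostname and the example-service backend are assumptions carried over from the question; an ingress controller must already be running in the cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
spec:
  rules:
  - host: hello-world.example.com    # assumed hostname; point its DNS record at the ingress controller
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service    # the NodePort service created in the question
            port:
              number: 8080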

You can access your service using the master IP or a worker IP. If you are planning to use it in production, or in a more reliable way, you should create a service of type LoadBalancer and use the load balancer's IP to access it.
If you are using any cloud environment, make sure the firewall rules allow incoming traffic.
The load balancer takes care of redirecting the request to whichever node the pod is running on. Otherwise you will have to manually hit the master IP or a worker IP depending on where the pod is running, and if the pod gets moved to a different node you will have to change the IP you are hitting.
Service Load Balancer
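As a sketch, the equivalent LoadBalancer service for the deployment in the question could be declared like this (the selector matches the label set by kubectl run above; on a cloud provider the EXTERNAL-IP field is then filled in automatically):
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: LoadBalancer
  selector:
    run: load-balancer-example   # label applied by kubectl run in the question
  ports:
  - port: 8080         # port exposed by the load balancer
    targetPort: 8080   # container port of the hello-world pods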

Related

k3d no external ip for a service of load balancer type

I am deploying the hello-world docker container to a k3d cluster.
To get an external IP, a service of type LoadBalancer is deployed.
After that I was hoping to call the application via the load balancer, but I don't get the external IP.
k3d create --name="mydemocluster" --workers="2" --publish="80:80"
export KUBECONFIG="$(k3d get-kubeconfig --name='mydemocluster')"
kubectl run kubia --image=hello-world --port=8080 --generator=run/v1
kubectl expose rc kubia --type=LoadBalancer --name kubia-http
export KUBECONFIG="$(k3d get-kubeconfig --name='mydemocluster')"
then kubectl get services:
A LoadBalancer type service will get an external IP only if you use a managed Kubernetes service provided by a cloud provider, such as AWS EKS, Azure AKS or Google GKE. Tools such as k3d are meant for local development, and if you create a LoadBalancer type service there, the external IP will stay pending. The alternative is to use a NodePort type service or an Ingress. Here is the doc on this.
You can also use kubectl port-forward or kubectl proxy to access the pod.
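For example, a port-forward sketch using the service name from the question (the local port 8080 is an arbitrary choice):
kubectl port-forward svc/kubia-http 8080:8080
After that the application answers on http://localhost:8080.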
I was following this example with k3d and there it seems to work fine:
(base) erik@buzzard:~/kubernetes/tutorial>
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 3d6h
mongodb-service ClusterIP 10.43.215.113 <none> 27017/TCP 27m
mongo-express-service LoadBalancer 10.43.77.100 172.20.0.2 8081:30000/TCP 27m
As I understand it, k3d runs k3s, which is more of a full Kubernetes setup than, for instance, minikube. I can access the service at http://172.20.0.2:8081 without problems.
You'll need a cloud controller manager to act as a service controller to do that. As far as on-prem goes, your best option is likely MetalLB.
That being said, I don't know how that will behave with the underlying docker network in K3d. It's on my list of things to try out. If I find it works well, I'll come back and update this post.
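If you do try MetalLB, a minimal layer-2 configuration looked roughly like the sketch below on the older (pre-0.13, ConfigMap-based) releases; the address range is an assumption you must adapt to a free range on your node network:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # assumed free range, adjust for your LAN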
I solved this by changing my manifest from a LoadBalancer type to an Ingress type. K3d doesn't seem to expose external IPs properly for a LoadBalancer type.
Oddly, I did find I was able to get the LoadBalancer type to work if I deployed really quickly. It seemed it had to be after the master node was up and before any agents were up.

How do I use minikube's DNS?

How do I use minikube's (cluster's) DNS? I want to receive all IP addresses associated with all pods of a selected headless service. I don't want to expose it outside the cluster. I am currently building the back-end layer.
As stated in the following answer:
What exactly is a headless service, what does it do/accomplish, and what are some legitimate use cases for it?
"Instead of returning a single DNS A record, the DNS server will return multiple A records for the service, each pointing to the IP of an individual pod backing the service at that moment."
Thus the pods in the back-end layer can communicate with each other.
I can't use the dig command; it is not installed in minikube. How would I install it, if that's possible? There is no apt available.
I hope this explains more accurately what I want to achieve.
You mentioned that you want to receive IP addresses associated with pods for selected service name for testing how does headless service work.
For testing purposes only, you can use port-forwarding. You can forward traffic from your local machine to the dns pod in your cluster. To do this, you need to run:
kubectl port-forward svc/kube-dns -n kube-system 5353:53
and it will expose the kube-dns service on your host. Then all you need is to use the dig command (or an alternative) to query the dns server.
dig @127.0.0.1 -p 5353 +tcp +short <service>.<namespace>.svc.cluster.local
You can also test your dns from inside of cluster e.g. by running a pod with interactive shell:
kubectl run --image tutum/dnsutils dns -it --rm -- bash
root@dns:/# dig +search <service>
Let me know if it helped.
As illustrated in kubernetes/minikube issue 4397
Containers don't have an IP address by default.
You'll want to use minikube service or minikube tunnel to get endpoint information.
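As a quick sketch of both (the service name follows the hello-node example quoted below):
minikube service hello-node --url    # prints a http://<minikube-ip>:<nodeport> URL for the service
minikube tunnel                      # runs in the foreground and assigns external IPs to LoadBalancer services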
See "hello-minikube/ Create a service":
By default, the Pod is only accessible by its internal IP address within the Kubernetes cluster.
To make the hello-node Container accessible from outside the Kubernetes virtual network, you have to expose the Pod as a Kubernetes Service.
Expose the Pod to the public internet using the kubectl expose command:
kubectl expose deployment hello-node --type=LoadBalancer --port=8080
The --type=LoadBalancer flag indicates that you want to expose your Service outside of the cluster.
View the Service you just created:
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-node LoadBalancer 10.108.144.78 <pending> 8080:30369/TCP 21s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP
On Minikube, the LoadBalancer type makes the Service accessible through the minikube service command.
Run the following command:
minikube service hello-node

what is the use of external IP address with a ClusterIP service type in kubernetes

What is the use of external IP address option in kubernetes service when the service is of type ClusterIP
An ExternalIP is just an endpoint through which services can be accessed from outside the cluster, so a ClusterIP type service with an ExternalIP can still be accessed inside the cluster using its service.namespace DNS name, but now it can also be accessed from its external endpoint, too. For instance, you could set the ExternalIP to the IP of one of your k8s nodes, or create an ingress to your cluster on that IP.
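A hedged sketch of such a service (the names and the IP are placeholders; the external IP should route to one of your nodes):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP
  selector:
    app: my-app              # assumed pod label
  externalIPs:
  - 203.0.113.10             # an address that routes to a cluster node
  ports:
  - port: 80
    targetPort: 8080         # assumed container port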
ClusterIP is the default service type in Kubernetes which allows you to reach your service only within the cluster.
If your service type is set to LoadBalancer or NodePort, a ClusterIP is automatically created, and the LoadBalancer or NodePort service will route to this ClusterIP address.
New external IP addresses are only allocated with the LoadBalancer type.
You can also use the node's external IP addresses when you set your service as NodePort. But in this case you will need extra firewall rules for your nodes to allow ingress traffic for your exposed node ports.
When you use a service with type: ClusterIP, it has only a cluster IP and no external IP address (<none>).
A ClusterIP is a unique IP assigned to a service from the cluster's IP pool; it can be used to reach the service's pods only from inside the cluster. ClusterIP is the default service type in Kubernetes.
kubectl expose deployment nginx --port=80 --target-port=80 --type=LoadBalancer
The example above will create a service with both an external IP and a cluster IP.
In the case of LoadBalancer and NodePort services, the service can be accessed from outside the cluster through the external IP.
Just to add to coolinuxoid's answer. I'm working with GKE, and based on their documentation, when you add a Service of type ClusterIP they suggest accessing it via:
Accessing your Service
List your running Pods:
kubectl get pods
In the output, copy one of the Pod names that begins with my-deployment.
NAME READY STATUS RESTARTS AGE
my-deployment-dbd86c8c4-h5wsf 1/1 Running 0 2m51s
Get a shell into one of your running containers:
kubectl exec -it pod-name -- sh
where pod-name is the name of one of the Pods in my-deployment.
In your shell, install curl:
apk add --no-cache curl
In the container, make a request to your Service by using your cluster IP address and port 80. Notice that 80 is the value of the port field of your Service. This is the port that you use as a client of the Service.
curl cluster-ip:80
So, while you might find a way to route to such a Service from outside, it's not a recommended/ordinary way to do it.
As a better alternative either use:
LoadBalancer if you are working on a project with a lot of services and detailed requirements.
NodePort if you are working on something smaller where cloud-native load balancing is not needed and you don't mind using your node's IP directly to map it to the Service. Incidentally, the same document does suggest an option to do so (tested it personally; works like a charm):
If the nodes in your cluster have external IP addresses, find the external IP address of one of your nodes:
kubectl get nodes --output wide
The output shows the external IP addresses of your nodes:
NAME STATUS ROLES AGE VERSION EXTERNAL-IP
gke-svc-... Ready none 1h v1.9.7-gke.6 203.0.113.1
Not all clusters have external IP addresses for nodes. For example, the nodes in private clusters do not have external IP addresses.
Create a firewall rule to allow TCP traffic on your node port:
gcloud compute firewall-rules create test-node-port --allow tcp:node-port
where: node-port is the value of the nodePort field of your Service.
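Putting the quoted steps together, the app should then answer at the node's external IP plus the service's nodePort; with the sample values above that would be something like:
curl http://203.0.113.1:node-port   # substitute the nodePort value of your Service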

For pod-to-pod communication, what IP should be used? The service's ClusterIP or the endpoint

I've deployed the Rancher Helm chart to my Kubernetes cluster and want to access the Rancher API/UI from another pod (i.e. a pod running an ingress-controller).
When I list the services and the endpoints, the IP addresses differ:
$ kubectl get ep | grep rancher
release-name-rancher 10.200.23.13:80 18h
and
$ kubectl get services | grep rancher
release-name-rancher ClusterIP 10.100.200.253 <none> 80/TCP 18h
Within the container of the client (i.e. the ingress controller), I see the service being represented by the service's ClusterIP:
$ env | grep RELEASE_NAME_RANCHER_SERVICE_HOST
RELEASE_NAME_RANCHER_SERVICE_HOST=10.100.200.253
Trying to reach the backend via the IP address in the Env does not work (curl 10.100.200.253 just delivers no response and blocks forever).
Trying to reach the backend via the endpoint address works:
$ curl 10.200.23.13
Found.
I'm quite confused as to why the endpoint IP address and the ClusterIP address differ, and why it is not possible to connect to the ClusterIP address. Any hints to polish my understanding?
In Kubernetes, every Pod and Service gets its own IP address. The kubectl get services IP address is the Kubernetes-internal address of the Service; the kubectl get ep address is the address of the Pod behind it. The Service actually acts like a load balancer, and there can be multiple Pods attached to it. The Kubernetes Service documentation goes into a lot of detail about what exactly is happening here.
Kubernetes also provides an internal DNS service that can resolve Service names. You generally shouldn't use any of these IP addresses directly; instead, use the host name release-name-rancher.default.svc.cluster.local (or replace "default" if you're running in some other Kubernetes namespace).
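So, from the ingress-controller pod, the call would look like this (replace default if the Rancher chart was installed into a different namespace):
curl http://release-name-rancher.default.svc.cluster.local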
While the ..._SERVICE_HOST environment variable you reference is supported and documented, I'd avoid using it. Of particular note, if you helm install or kubectl apply a large set of resources at once and the Pod gets created before the Service, you'll be in a consistent state except that the Pod won't actually have this environment variable. In a Helm land where Services don't have fixed names, the environment variable name won't be fixed either. Prefer the DNS name.

External IP assignment with Minikube ingress add-on enabled

For development purposes, I am trying to use Minikube. I want to test how my application will catch an event of exposing a service and assigning an External-IP.
When I exposed a service in Google Container Engine quick start tutorial I could see an event of External IP assignment with:
kubectl get services --watch
I want to achieve the same with Minikube (if possible).
Here is how I try to set things up locally on my OSX development machine:
minikube start --vm-driver=xhyve
minikube addons enable ingress
kubectl run echoserver --image=gcr.io/google_containers/echoserver:1.4 --port=8080
kubectl expose deployment echoserver --type="LoadBalancer"
kubectl get services --watch
I see the following output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
echoserver LoadBalancer 10.0.0.138 <pending> 8080:31384/TCP 11s
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 4m
The External-IP field never gets updated and stays in the pending phase. Is it possible to achieve external IP assignment with Minikube?
On GKE or AWS installs, the external IP comes from the cloud integration, which reports back to the kube API the address assigned to the created LB.
To have the same on minikube you'd have to run some kind of LB controller, e.g. an haproxy-based one, but honestly, for minikube it makes little sense: you have a single IP that you know in advance via minikube ip, so you can use NodePort with that knowledge. An LB solution would require setting up some IP range that can be mapped to particular NodePorts, as this is effectively what an LB does: take traffic from extIP:extPort and proxy it to minikubeIP:NodePort.
Unless your use case prevents you from it, you should consider Ingress as the way of ingesting traffic to your minikube.
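Note that the LoadBalancer service in the output above already carries a NodePort (31384 in the 8080:31384/TCP column), so on minikube you can reach the app right away with:
curl http://$(minikube ip):31384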
If you want to emulate an external IP assignment event (like the one you can observe on GKE or AWS), this can be achieved by applying the following patch on your sandbox kubernetes:
kubectl run minikube-lb-patch --replicas=1 --image=elsonrodriguez/minikube-lb-patch:0.1 --namespace=kube-system
https://github.com/elsonrodriguez/minikube-lb-patch#assigning-external-ips