How to expose a service in Kubernetes?

My organization offers Containers as a Service through Rancher. I started a rabbitmq service using a web interface, and the service started OK. However, I'm having trouble accessing this service through an external IP.
Using kubectl, I tried to get the list of the running services:
$ kubectl get services -n flash
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rabbitmq-ha ClusterIP XX.XX.X.XXX <none> 15672/TCP,5672/TCP,4369/TCP 46m
rabbitmq-ha-discovery ClusterIP None <none> 15672/TCP,5672/TCP,4369/TCP 46m
How do I expose the 'rabbitmq-ha' service to the external world so I can access it via <IP address>:15672, etc.? Right now the external IP is <none>, and I'm not sure how to get Kubernetes to assign one.

If you are in a supported cloud environment (AWS, GCP, Azure, etc.), then you can create a Service of type LoadBalancer: an external load balancer will be provisioned and an external IP or DNS name will be assigned by your cloud provider. Here are the docs on this.
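For example, a minimal sketch of a LoadBalancer Service for rabbitmq-ha could look like this (the selector label is an assumption based on typical rabbitmq-ha deployments, so adjust it to the labels your pods actually carry):
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-ha-external
  namespace: flash
spec:
  type: LoadBalancer
  selector:
    app: rabbitmq-ha        # assumed label, check your pod labels
  ports:
  - name: management
    port: 15672
    targetPort: 15672
  - name: amqp
    port: 5672
    targetPort: 5672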
If you are on bare metal on-prem, then you can use MetalLB, which provides an implementation of LoadBalancer.
Apart from the above, you can also use a NodePort-type Service to expose a service outside your Kubernetes cluster. Here is a guide on how to do that.
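For instance, you could switch the existing Service to NodePort with a one-line patch (a sketch, assuming you want to reuse the existing rabbitmq-ha Service rather than create a new one):
$ kubectl -n flash patch svc rabbitmq-ha -p '{"spec": {"type": "NodePort"}}'
$ kubectl -n flash get svc rabbitmq-ha   # note the node ports Kubernetes assigned, e.g. 15672:3XXXX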
One disadvantage of the LoadBalancer-type Service is that an external load balancer is provisioned for every Service, which gets costly. As an alternative you can use the Ingress abstraction. Ingress is implemented by many controllers, such as nginx, HAProxy, and Traefik.
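As a rough sketch, an Ingress for just the management UI could look like the following (the host name is a placeholder, an ingress controller such as nginx must already be installed, and note that Ingress only covers HTTP, so the AMQP port 5672 would still need NodePort or LoadBalancer):
apiVersion: networking.k8s.io/v1   # older clusters use a different apiVersion
kind: Ingress
metadata:
  name: rabbitmq-mgmt
  namespace: flash
spec:
  rules:
  - host: rabbitmq.example.com     # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: rabbitmq-ha
            port:
              number: 15672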

Related

expose Istio-gateway on port 80

I'm running a bare-metal Kubernetes cluster with 1 master node and 3 worker nodes. I have a bunch of services deployed inside, with Istio as the ingress gateway.
Everything works fine since I can access my services from outside using the ingress-gateway NodePort.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.106.9.2 <pending> 15021:32402/TCP,80:31106/TCP,443:31791/TCP 2d23h
istiod ClusterIP 10.107.220.130 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 2d23h
In our case that is port 31106.
The issue is, I don't want my customers to access my service on port 31106; that's not user friendly. So is there a way to expose port 80 to the outside?
In other words, instead of typing http://example.com:31106/, I want them to be able to type http://example.com/.
Any solution could help.
Based on official documentation:
If the EXTERNAL-IP value is set, your environment has an external load balancer that you can use for the ingress gateway. If the EXTERNAL-IP value is <none> (or perpetually <pending>), your environment does not provide an external load balancer for the ingress gateway. In this case, you can access the gateway using the service’s node port.
This is in line with what David Maze wrote in the comment:
A LoadBalancer-type service would create that load balancer, but only if Kubernetes knows how; maybe look up metallb for an implementation of that. The NodePort port number will be stable unless the service gets deleted and recreated, which in this case would mean wholesale uninstalling and reinstalling Istio.
In your situation you need to access the gateway using the NodePort, and then you can configure Istio. Everything is described step by step in this doc. You need to choose the instructions corresponding to NodePort and then set the ingress IP depending on the cluster provider. You can also find sample yaml files in the documentation.
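Roughly, following that doc, determining the ingress host and port for a NodePort setup looks like this (a sketch assuming the default istio-ingressgateway port names; adjust if your install differs):
$ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
    -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
$ export INGRESS_HOST=$(kubectl -n istio-system get po -l istio=ingressgateway \
    -o jsonpath='{.items[0].status.hostIP}')
$ curl -I http://$INGRESS_HOST:$INGRESS_PORT/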

How to connect to a GKE service from GCE using internal IPs

I have an Nginx service deployed in GKE with a NodePort exposed, and I want to connect to it from my Compute Engine instances through an internal IP address only. When I try to connect to Nginx with the cluster IP, I only get a timeout.
I think the ClusterIP is only reachable inside the cluster, but since I also enabled a NodePort it might work.
I don't know the difference between NodePort and ClusterIP very well.
Background
You can expose your application outside the cluster using NodePort or LoadBalancer. ClusterIP allows connections only from inside the cluster and is the default Service type.
ClusterIP:
Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
NodePort:
Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
LoadBalancer:
Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
In short, when you are using NodePort you need to use NodePublicIP:NodePort. When you are using LoadBalancer, it will create a network LB with an external IP.
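For example, a NodePort Service for your Nginx Deployment might look like this sketch (the selector label and the fixed nodePort 30785 are assumptions chosen to match the output further down; if you omit nodePort, Kubernetes picks one from the 30000-32767 range):
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx               # assumed pod label
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30785          # optional; omit to let Kubernetes choose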
In your GKE cluster you have something called a VPC - Virtual Private Cloud - which provides networking for your cloud-based resources and services and is global, scalable, and flexible.
Solution
Using a VPC-native cluster
With VPC-native clusters you'll be able to reach Pod IPs directly. You will need to create a subnet in order to do that. The full guide can be found here.
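A rough example of creating a VPC-native cluster (the cluster name and CIDR ranges here are placeholders; pick ranges that fit your VPC):
$ gcloud container clusters create my-vpc-native-cluster \
    --enable-ip-alias \
    --cluster-ipv4-cidr=10.8.0.0/14 \
    --services-ipv4-cidr=10.12.0.0/20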
Using VPC Peering
If you would like to connect across 2 different GCP projects, you will need to use VPC Peering.
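As a sketch, peering two VPC networks with gcloud could look like this (all names are placeholders, and the equivalent command has to be run from both sides for the peering to become active):
$ gcloud compute networks peerings create gke-to-gce \
    --network=my-network \
    --peer-project=other-project \
    --peer-network=other-network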
Access from outside the cluster using NodePort
If you would like to reach your nginx service from outside you can use NodeIP:NodePort.
Use the node's external IP (keep in mind that this node must have an application pod on it; if you have 3 nodes and only 1 application replica, you must use the external IP of the node where that pod was deployed). In addition, you need to allow NodePort access in the firewall.
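A sketch of such a firewall rule (the rule name is made up and 30785 matches the NodePort in the example below; adjust --source-ranges to the range you actually want to allow):
$ gcloud compute firewall-rules create allow-nginx-nodeport \
    --allow=tcp:30785 \
    --source-ranges=10.0.0.0/8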
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
gke-cluster-1-default-pool-faec7b51-n5hm Ready <none> 3h23m v1.17.14-gke.1600 10.128.0.26 23.236.50.249 Container-Optimized OS from Google 4.19.150+ docker://19.3.6
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx NodePort 10.8.9.10 <none> 80:30785/TCP 39m
$ curl 23.236.50.249:30785
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
The cluster IP address is only accessible within the cluster; that's why it is giving a timeout. NodePort is used to expose a port on the public IP of every node of the cluster, so it may work.

k3d: no external IP for a service of LoadBalancer type

I am deploying the hello-world docker container to a k3d cluster.
To get an external IP, a Service of type LoadBalancer is deployed.
After that I was hoping to call the application via the load balancer, but I don't get an external IP.
k3d create --name="mydemocluster" --workers="2" --publish="80:80"
export KUBECONFIG="$(k3d get-kubeconfig --name='mydemocluster')"
kubectl run kubia --image=hello-world --port=8080 --generator=run/v1
kubectl expose rc kubia --type=LoadBalancer --name kubia-http
export KUBECONFIG="$(k3d get-kubeconfig --name='mydemocluster')"
then kubectl get services:
A LoadBalancer-type Service will get an external IP only if you use a managed Kubernetes service provided by a cloud provider such as AWS EKS, Azure AKS, or Google GKE. Tools such as k3d are for local development, and if you create a LoadBalancer-type Service the external IP will stay pending. The alternative is to use a NodePort-type Service or an Ingress. Here is the doc on this.
You can also use kubectl port-forward or kubectl proxy to access the pod.
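For example, a quick local check with port-forward might look like this (a sketch assuming the kubia-http Service from your commands, and that the container really listens on 8080, which the stock hello-world image does not):
$ kubectl port-forward svc/kubia-http 8080:8080
# in another terminal:
$ curl http://localhost:8080/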
I was following this example with k3d, and there it seems to work fine:
(base) erik@buzzard:~/kubernetes/tutorial>
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 3d6h
mongodb-service ClusterIP 10.43.215.113 <none> 27017/TCP 27m
mongo-express-service LoadBalancer 10.43.77.100 172.20.0.2 8081:30000/TCP 27m
As I understand it, k3d runs k3s, which is more of a full Kubernetes setup than, for instance, minikube. I can access the service at http://172.20.0.2:8081 without problems.
You'll need a cloud controller manager to act as a service controller to do that. As far as on-prem goes, your best option is likely MetalLB.
That being said, I don't know how that will behave with the underlying docker network in K3d. It's on my list of things to try out. If I find it works well, I'll come back and update this post.
I solved this by changing my manifest from a LoadBalancer type to an Ingress type. k3d doesn't seem to expose external IPs properly for a LoadBalancer type.
Oddly, I did find I was able to get the LoadBalancer type to work if I deployed really quickly. It seemed it had to be after the master node was up and before any agents were up.

Kubernetes on Google Cloud - Access pod port without port-forwarding

I have a Google Cloud project with an internal network.
I deployed Kubernetes using this internal network, and I deployed a deployment with a service (no external IP).
Services:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
redis-master ClusterIP 10.0.0.213 <none> 6379/TCP 27s
Now, I also deployed another VM instance within the same internal network. I want this VM to access the IP 10.0.0.213 on port 6379, but it's not working.
I read here that I need to port-forward it in order to make that possible, but I don't want to expose my Kubernetes cluster credentials on this VM.
A LoadBalancer will give me an external IP, which will work within the internal network but will also work from the internet.
So, how do I expose it just to the Google internal network?
I guess what you need is an internal load balancer. You can simply annotate the Service with cloud.google.com/load-balancer-type: "Internal". See the internal load balancing documentation.
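A minimal sketch of such a Service (the name and selector labels are assumptions based on the redis-master Service above, so match them to your deployment):
apiVersion: v1
kind: Service
metadata:
  name: redis-master-internal
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: redis               # assumed pod label
    role: master             # assumed pod label
  ports:
  - port: 6379
    targetPort: 6379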

Access EKS DNS from worker nodes and other EC2 instances in same VPC

I have created an EKS cluster by following the AWS getting-started guide, with k8s version 1.11. I have not changed any configs for kube-dns.
If I create a service, let's say myservice, I would like to access it from some other EC2 instance which is not part of this EKS cluster but is in the same VPC.
Basically, I want this DNS to work as the DNS server for instances outside the cluster as well. How will I be able to do that?
I have seen that the kube-dns service gets a cluster IP but doesn't get an external IP. Is that necessary for me to be able to access it from outside the cluster?
This is the current response:
[ec2-user@ip-10-0-0-149 ~]$ kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 172.20.0.10 <none> 53/UDP,53/TCP 4d
My VPC subnet is 10.0.0.0/16
I am trying to reach this 172.20.0.10 IP from other instances in my VPC and I am not able to, which I think is expected because my VPC is not aware of any subnet range like 172.20.0.10. But then how do I make this DNS service accessible to all my instances in the VPC?
The problem you are facing is mostly not related to DNS. As you said, you cannot reach the ClusterIP from your other instances because it belongs to the internal cluster network and is unreachable from outside of Kubernetes.
Instead of going in the wrong direction, I recommend you make use of the Nginx Ingress controller, which allows you to run Nginx backed by an AWS load balancer and expose your services through it.
You can further integrate your Ingresses with the ExternalDNS add-on, which will allow you to dynamically create DNS records in Route 53.
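As a rough sketch of how the pieces fit together on a 1.11 cluster (the hostname is a placeholder, and this assumes the Nginx ingress controller and ExternalDNS are already installed and configured against your Route 53 zone):
apiVersion: extensions/v1beta1   # networking.k8s.io/v1 on newer clusters
kind: Ingress
metadata:
  name: myservice
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: myservice.example.com  # ExternalDNS creates the Route 53 record for this host
    http:
      paths:
      - path: /
        backend:
          serviceName: myservice
          servicePort: 80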
This will take some time to configure but this is the Kubernetes way.