How to assign external IP address to running service? - kubernetes

I have the following service:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rancher ClusterIP 10.245.162.197 <none> 80/TCP 10h
and I would like to assign an EXTERNAL-IP to it. I tried:
kubectl expose deployment rancher --type=LoadBalancer --name=rancher-access
but the EXTERNAL-IP still does not get assigned. I am using Digital Ocean Kubernetes.
How do I get an EXTERNAL-IP for the rancher service?

You have two options:
The LoadBalancer type of service is implemented by adding code to the Kubernetes master that is specific to each cloud provider. Digital Ocean is not among the supported cloud providers, so the LoadBalancer type will not be able to take advantage of Digital Ocean's Floating IPs.
Instead, you should consider using a NodePort service or attaching an ExternalIP to your service and mapping the exposed IP to a Digital Ocean Floating IP.
To get the actual IP you need to expose, SSH into your gateway droplet and find its anchor IP by querying the metadata service:
curl -s http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address
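For illustration only, a Service manifest that attaches that anchor IP as an externalIP could look roughly like this (the selector label and the 10.17.0.5 address are hypothetical placeholders):
apiVersion: v1
kind: Service
metadata:
  name: rancher-access
spec:
  type: NodePort              # externalIPs also works with a ClusterIP service
  selector:
    app: rancher              # assumed label on the rancher pods
  ports:
    - port: 80
      targetPort: 80
  externalIPs:
    - 10.17.0.5               # hypothetical anchor IP returned by the metadata query above
Traffic sent to the droplet's Floating IP is delivered to its anchor IP, which is why the anchor address is the one to expose.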
Use a Digital Ocean-created cloud provider implementation
You could use an NGINX ingress controller and point a DigitalOcean LB at the host where the controller is deployed. With some more tinkering you could probably make this a highly available setup:
https://github.com/hobby-kube/guide#bringing-traffic-to-the-cluster
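As a rough sketch of that approach (the hostname is hypothetical, an ingress controller must already be deployed, and the API version assumes a recent cluster), an Ingress routing traffic to the existing rancher service might look like:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rancher-ingress
spec:
  rules:
    - host: rancher.example.com        # hypothetical hostname pointed at the LB / controller host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: rancher          # the existing ClusterIP service from the question
                port:
                  number: 80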

Related

expose Istio-gateway on port 80

I'm running a bare metal Kubernetes cluster with 1 master node and 3 worker Nodes. I have a bunch of services deployed inside with Istio as an Ingress-gateway.
Everything works fine since I can access my services from outside using the ingress-gateway NodePort.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.106.9.2 <pending> 15021:32402/TCP,80:31106/TCP,443:31791/TCP 2d23h
istiod ClusterIP 10.107.220.130 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 2d23h
In our case that port is 31106.
The issue is, I don't want my customers to access my service on port 31106; that's not user-friendly. So is there a way to expose port 80 to the outside?
In other words, instead of typing http://example.com:31106/, I want them to be able to type http://example.com/.
Any solution would help.
Based on official documentation:
If the EXTERNAL-IP value is set, your environment has an external load balancer that you can use for the ingress gateway. If the EXTERNAL-IP value is <none> (or perpetually <pending>), your environment does not provide an external load balancer for the ingress gateway. In this case, you can access the gateway using the service’s node port.
This is in line with what David Maze wrote in the comment:
A LoadBalancer-type service would create that load balancer, but only if Kubernetes knows how; maybe look up metallb for an implementation of that. The NodePort port number will be stable unless the service gets deleted and recreated, which in this case would mean wholesale uninstalling and reinstalling Istio.
In your situation you need to access the gateway using the NodePort, and then you can configure Istio. Everything is described step by step in this doc. You need to choose the instructions corresponding to NodePort and then set the ingress IP depending on the cluster provider. You can also find sample yaml files in the documentation.
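For reference, that doc determines the NodePort and ingress host with commands roughly like the following (this assumes the default Istio port names http2/https; adjust if your install differs):
$ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
$ export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
$ export INGRESS_HOST=<node external IP>   # on bare metal, one of your worker nodes' addresses
To serve plain port 80 without the high NodePort number, you would still need something in front of the nodes (for example MetalLB or an external load balancer/proxy), as the quoted comment suggests.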

How to connect to a GKE service from GCE using internal IPs

I have an Nginx service deployed in GKE with a NodePort exposed, and I want to connect to it from my Compute Engine instances through internal IP addresses only. When I try to connect to the Nginx with the cluster IP I only get a timeout.
I think the ClusterIP is only reachable inside the cluster, but since I enabled the NodePort it might work.
I don't know the difference between NodePort and ClusterIP very well.
Background
You can expose your application outside the cluster using NodePort or LoadBalancer. ClusterIP allows connections only inside the cluster and is the default Service type.
ClusterIP:
Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType
NodePort:
Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
LoadBalancer:
Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
In short, when you are using NodePort you need to use NodePublicIP:NodePort. When you are using LoadBalancer it will create Network LB with ExternalIP.
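A minimal sketch of a Service manifest, assuming pods labeled app: nginx; the three behaviours above differ only in the type field:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort            # ClusterIP (default) | NodePort | LoadBalancer
  selector:
    app: nginx              # assumed pod label
  ports:
    - port: 80              # cluster-internal Service port
      targetPort: 80        # container port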
In your GKE cluster you have something called VPC - Virtual Private Cloud which provides networking for your cloud-based resources and services that is global, scalable, and flexible.
Solution
Using a VPC-native cluster
With VPC-native clusters you'll be able to reach Pod IPs directly. You will need to create a subnet in order to do it. A full guide can be found here.
Using VPC Peering
If you would like to connect from 2 different projects in GKE, you will need to use VPC Peering.
Access from outside the cluster using NodePort
If you would like to reach your nginx service from outside, you can use NodeIP:NodePort.
Use the NodeExternalIP (keep in mind that this node must have an application pod on it; if you have 3 nodes and only 1 application replica, you must use the NodeExternalIP of the node where that pod was deployed). In addition, you need to allow NodePort access in the firewall (see the sketch after the example output below).
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
gke-cluster-1-default-pool-faec7b51-n5hm Ready <none> 3h23m v1.17.14-gke.1600 10.128.0.26 23.236.50.249 Container-Optimized OS from Google 4.19.150+ docker://19.3.6
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx NodePort 10.8.9.10 <none> 80:30785/TCP 39m
$ curl 23.236.50.249:30785
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
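On GKE that firewall rule is not created for you automatically. A hedged sketch with gcloud (the rule name is arbitrary, 30785 is the NodePort from the example above, and you should narrow the source range in practice):
$ gcloud compute firewall-rules create allow-nginx-nodeport --allow tcp:30785 --source-ranges 0.0.0.0/0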
The ClusterIP address is only accessible within the cluster, which is why you get a timeout. NodePort exposes a port on the public IP of every node of the cluster, so it may work.

External ip always <none> or <pending> in kubernetes

Recently I started building my very own Kubernetes cluster using a few Raspberry Pis.
I have gotten to the point where I have a cluster up and running!
Some background info on how I set up the cluster: I used this guide.
But now, when I want to deploy and expose an application, I encounter some issues...
Following the Kubernetes tutorials I have made a deployment of nginx, and this is running fine. When I do a port-forward I can see the default nginx page on my localhost.
Now the tricky part: creating a service and routing the traffic from the internet through an ingress to the service.
I have executed the following commands:
kubectl expose deployment/nginx --type="NodePort" --port 80
kubectl expose deployment/nginx --type="Loadbalancer" --port 80
And these result in the following.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25h
nginx NodePort 10.103.77.5 <none> 80:30106/TCP 7m50s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25h
nginx LoadBalancer 10.107.233.191 <pending> 80:31332/TCP 4s
The external IP address never shows up, which makes it quite impossible for me to access the application from outside the cluster by doing curl some-ip:80, which in the end is the whole reason for me to set up this cluster.
If any of you have some clear guides or advice I can work with, it would be really appreciated!
Note:
I have read things about LoadBalancer; this is supposed to be provided by the cloud host. Since I run on RPi I don't think this will work for me, but I believe NodePort should be just fine to route with an ingress.
Also I am aware of the fact that I should have an ingress controller of some sort for ingress to work.
Edit
So I now have the following for the NodePort - 30168
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 26h
nginx NodePort 10.96.125.112 <none> 80:30168/TCP 6m20s
and for the IP address I have either 192.168.178.102 or 10.44.0.1
$ kubectl describe pod nginx-688b66fb9c-jtc98
Node: k8s-worker-2/192.168.178.102
IP: 10.44.0.1
But when I enter either of these IP addresses in the browser with the NodePort, I still don't see the nginx page. Am I doing something wrong?
Any of your worker nodes' IP addresses will work for a NodePort (or LoadBalancer) service. From the description of NodePort services:
If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag (default: 30000-32767). Each node proxies that port (the same port number on every Node) into your Service.
If you don't know those IP addresses kubectl get nodes can tell you; if you're planning on calling them routinely then setting up a load balancer in front of the cluster or configuring DNS (or both!) can be helpful.
In your example, say some node has the IP address 10.20.30.40 (you log into the Raspberry Pi directly, run ifconfig, and that's the host's address); you can reach the nginx from the second example at http://10.20.30.40:31332.
The EXTERNAL-IP field will never fill in for a NodePort service, or when you're not in a cloud environment that can provide an external load balancer for you. That doesn't affect this case; for either of these service types you can still call the port on the node directly.
Since you are not on a cloud provider, you need to use MetalLB to get the LoadBalancer features working.
Kubernetes does not offer an implementation of network load-balancers (Services of type LoadBalancer) for bare metal clusters. The implementations of Network LB that Kubernetes does ship with are all glue code that calls out to various IaaS platforms (GCP, AWS, Azure…). If you’re not running on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will remain in the “pending” state indefinitely when created.
Bare metal cluster operators are left with two lesser tools to bring user traffic into their clusters, “NodePort” and “externalIPs” services. Both of these options have significant downsides for production use, which makes bare metal clusters second class citizens in the Kubernetes ecosystem.
MetalLB aims to redress this imbalance by offering a Network LB implementation that integrates with standard network equipment, so that external services on bare metal clusters also “just work” as much as possible
The MetalLB setup is very easy:
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.3/manifests/metallb.yaml
This will deploy MetalLB to your cluster, under the metallb-system namespace
You need to create a ConfigMap with the IP range you want to use. Create a file named metallb-cf.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # select the range you want
kubectl apply -f metallb-cf.yaml
That's all.
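As an optional sanity check (the namespace comes from the manifest applied above), you can verify that the MetalLB pods are running:
$ kubectl get pods -n metallb-system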
To use it with your services, just create them with type LoadBalancer and MetalLB will do the rest. If you want to customize the configuration, see here.
MetalLB will assign an IP for your service/ingress, but if you are on a NAT network you need to configure your router to forward the requests for your ingress/service IP.
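Since the nginx service in the question already exists as a NodePort, one option (a sketch only; recreating it with kubectl expose --type="LoadBalancer" as above works too) is to switch its type in place:
$ kubectl patch svc nginx -p '{"spec": {"type": "LoadBalancer"}}'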
EDIT:
If you have problems getting an External IP with MetalLB running on Raspberry Pi, try switching iptables to the legacy version:
sudo sysctl net.bridge.bridge-nf-call-iptables=1
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
Reference: https://www.shogan.co.uk/kubernetes/building-a-raspberry-pi-kubernetes-cluster-part-2-master-node/
I hope that helps.

How to expose a service in Kubernetes?

My organization offers Containers as a Service through Rancher. I started a rabbitmq service using the web interface, and the service started OK. I'm having trouble accessing this service through an external IP.
Using kubectl, I tried to get the list of the running services:
$ kubectl get services -n flash
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rabbitmq-ha ClusterIP XX.XX.X.XXX <none> 15672/TCP,5672/TCP,4369/TCP 46m
rabbitmq-ha-discovery ClusterIP None <none> 15672/TCP,5672/TCP,4369/TCP 46m
How do I expose the 'rabbitmq-ha' service to the external world so I can access it via IP address:15672, etc.? Right now, the external IP is none. I'm not sure how to get Kubernetes to assign one.
If you are in a supported cloud environment (AWS, GCP, Azure, etc.) then you can create a service of type LoadBalancer; an external load balancer will be provisioned and an external IP or DNS name will be assigned by your cloud provider. Here are the docs on this.
If you are on bare metal on-prem then you can use MetalLB, which provides an implementation of LoadBalancer.
Apart from the above, you can also use a NodePort type service to expose a service so it is accessible from outside your Kubernetes cluster. Here is a guide on how to do that.
One disadvantage of using a LoadBalancer type service is that for every service an external load balancer will be provisioned, which is costly; as an alternative you can use the Ingress abstraction. Ingress is implemented by many pieces of software such as NGINX, HAProxy, and Traefik.
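As a hedged sketch of the LoadBalancer route (the service and namespace names come from the question, the --name value is arbitrary, and only one port is exposed here, so a full Service manifest would be needed if you also want 5672 and 4369):
$ kubectl expose service rabbitmq-ha -n flash --name=rabbitmq-ha-external --type=LoadBalancer --port=15672 --target-port=15672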

Kubernetes is giving junk external ip

Kubernetes is giving junk external ip, check output of below command:
$ kubectl get svc frontend -n web-console
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend LoadBalancer 100.68.90.01 a55ea503bbuddd... 80:31161/TCP 5d
Please help me understand what this external IP means.
According to this: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
Traffic that ingresses into the cluster with the external IP (as destination IP), on the Service port, will be routed to one of the Service endpoints. externalIPs are not managed by Kubernetes and are the responsibility of the cluster administrator.
It seems you selected the LoadBalancer type, your cloud provider provisioned a load balancer for you, and that external IP is the load balancer's DNS name.
The options that allow you to expose your application for access from outside the cluster are:
Kubernetes Service of type LoadBalancer
Kubernetes Service of type ‘NodePort’ + Ingress
A Service in Kubernetes is an abstraction defining a logical set of Pods and an access policy and it can be exposed in different ways by specifying a type (ClusterIP, NodePort, LoadBalancer) in the service spec. The LoadBalancer type is the simplest approach.
Once the service is created, it has an external IP address as in your output:
$ kubectl get svc frontend -n web-console
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend LoadBalancer 100.68.90.01 a55ea503bbuddd... 80:31161/TCP 5d
Now, service 'frontend' can be accessible from outside the cluster without the need for additional components like an Ingress.
To test the external IP run this curl command from your machine:
$ curl http://<external-ip>:<port>
where <external-ip> is the external IP address of your Service, and <port> is the value of Port in your Service description.
ExternalIP gives the possibility to access services from outside the cluster (the ExternalIP is an endpoint). A ClusterIP type service with an ExternalIP can still be accessed inside the cluster using its service.namespace DNS name, but now it can also be accessed from its external endpoint, too.
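To illustrate with the 'frontend' service from this question (the in-cluster name follows the standard service.namespace.svc.cluster.local pattern; the external address is whatever your load balancer or externalIP resolves to):
$ curl http://frontend.web-console.svc.cluster.local        # from inside the cluster, via the ClusterIP service
$ curl http://<external-ip>:80                               # from outside, via the external endpoint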