External IP always <none> or <pending> in Kubernetes

Recently I started building my very own Kubernetes cluster using a few Raspberry Pis.
I have gotten to the point where I have a cluster up and running!
Some background info on how I set up the cluster: I used this guide.
But now, when I want to deploy and expose an application, I encounter some issues...
Following the Kubernetes tutorials I have made a deployment of nginx, and this is running fine. When I do a port-forward I can see the default nginx page on my localhost.
Now the tricky part: creating a service and routing the traffic from the internet through an ingress to the service.
I have executed the following commands:
kubectl expose deployment/nginx --type="NodePort" --port 80
kubectl expose deployment/nginx --type="LoadBalancer" --port 80
And these result in the following.
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP        25h
nginx        NodePort       10.103.77.5      <none>        80:30106/TCP   7m50s

NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP        25h
nginx        LoadBalancer   10.107.233.191   <pending>     80:31332/TCP   4s
The external IP address never shows, which makes it quite impossible for me to access the application from outside of the cluster by doing curl some-ip:80, which in the end is the whole reason for me to set up this cluster.
If any of you have some clear guides or advice I can work with, it would be really appreciated!
Note:
I have read things about LoadBalancer; this is supposed to be provided by the cloud host. Since I run on RPis I don't think this will work for me, but I believe NodePort should be just fine to route to with an ingress.
Also, I am aware of the fact that I should have an ingress controller of some sort for ingress to work.
Edit
So I have the following now for the NodePort - 30168
$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        26h
nginx        NodePort    10.96.125.112   <none>        80:30168/TCP   6m20s
And for the IP address I have either 192.168.178.102 or 10.44.0.1:
$ kubectl describe pod nginx-688b66fb9c-jtc98
Node: k8s-worker-2/192.168.178.102
IP: 10.44.0.1
But when I enter either of these IP addresses in the browser with the NodePort, I still don't see the nginx page. Am I doing something wrong?

Any of your worker nodes' IP addresses will work for a NodePort (or LoadBalancer) service. From the description of NodePort services:
If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag (default: 30000-32767). Each node proxies that port (the same port number on every Node) into your Service.
If you don't know those IP addresses, kubectl get nodes can tell you; if you're planning on calling them routinely, then setting up a load balancer in front of the cluster or configuring DNS (or both!) can be helpful.
In your example, say some node has the IP address 10.20.30.40 (you log into the Raspberry Pi directly, run ifconfig, and that's the host's address); you can reach the nginx from the second example at http://10.20.30.40:31332.
The EXTERNAL-IP field will never fill in for a NodePort service, or when you're not in a cloud environment that can provide an external load balancer for you. That doesn't affect this case: for either of these service types you can still call the port on the node directly.
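Putting that together, a quick check could look like this (a sketch: 10.20.30.40 stands in for one of your actual node addresses, and 31332 is the NodePort from the LoadBalancer service in the question):

$ kubectl get nodes -o wide              # lists each node's INTERNAL-IP / EXTERNAL-IP
$ curl http://10.20.30.40:31332          # should return the nginx welcome page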

Since you are not on a cloud provider, you need to use MetalLB to get the LoadBalancer features working.
Kubernetes does not offer an implementation of network load-balancers (Services of type LoadBalancer) for bare metal clusters. The implementations of Network LB that Kubernetes does ship with are all glue code that calls out to various IaaS platforms (GCP, AWS, Azure…). If you’re not running on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will remain in the “pending” state indefinitely when created.
Bare metal cluster operators are left with two lesser tools to bring user traffic into their clusters, “NodePort” and “externalIPs” services. Both of these options have significant downsides for production use, which makes bare metal clusters second class citizens in the Kubernetes ecosystem.
MetalLB aims to redress this imbalance by offering a Network LB implementation that integrates with standard network equipment, so that external services on bare metal clusters also “just work” as much as possible
The MetalLB setup is very easy:
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.3/manifests/metallb.yaml
This will deploy MetalLB to your cluster, under the metallb-system namespace.
You need to create a ConfigMap with the IP range you want to use; create a file named metallb-cf.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # select the range you want
kubectl apply -f metallb-cf.yaml
That's all.
To use it with your services, just create them with type LoadBalancer and MetalLB will do the rest. If you want to customize the configuration, see here.
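For example, a minimal Service of this kind could look like the following (a sketch; it assumes the nginx Deployment's Pods carry the label app: nginx):

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx          # assumed Pod label
  ports:
  - port: 80
    targetPort: 80

MetalLB then picks a free address from the configured pool and publishes it as the Service's EXTERNAL-IP.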
MetalLB will assign an IP for your service/ingress, but if you are in a NAT network you need to configure your router to forward the requests for your ingress/service IP.
EDIT:
If you have problems getting an External IP with MetalLB running on Raspberry Pi, try changing iptables to the legacy version:
sudo sysctl net.bridge.bridge-nf-call-iptables=1
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
Reference: https://www.shogan.co.uk/kubernetes/building-a-raspberry-pi-kubernetes-cluster-part-2-master-node/
I hope that helps.

Related

What does "within the cluster" mean in the context of ClusterIP service?

I have a Kubernetes cluster with the following:
A deployment of some demo web server
A ClusterIP service that exposes this deployment pods
Now, I have the cluster IP of the service:
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes      ClusterIP   10.96.0.1      <none>        443/TCP   5d3h
svc-clusterip   ClusterIP   10.98.148.55   <none>        80/TCP    16m
Now I can see that I can access this service from the host (!) - not within a Pod or anything:
$ curl 10.98.148.55
Hello world ! Version 1
The thing is that I'm not sure if this capability is part of the definition of the ClusterIP service - i.e. is it guaranteed to work this way no matter what network plugin I use, or is this plugin-dependent?
The Kubernetes docs state that:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType
It's not clear what is meant by "within the cluster" - does that mean within a container (pod) in the cluster? or even from the nodes themselves as in the example above?
does that mean within a container (pod) in the cluster? or even from the nodes themselves
You can access the ClusterIP from Kubernetes nodes and Pods. This IP is a virtual IP, and it only works within the cluster. One way it works (apart from the CNI) is that, using the Linux kernel's iptables/IPVS feature, the packet is rewritten with a Pod IP address and load-balanced among the Pods. These rules are maintained by kube-proxy.
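For instance, on a node where kube-proxy runs in iptables mode you can inspect the rewrite rules for the Service IP from the question (a sketch; KUBE-SERVICES is the default kube-proxy chain name and the exact output varies by setup):

$ sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.98.148.55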
The definition from the Kubernetes documentation says it's reachable from within the cluster:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
The question is, how do we understand "the cluster"?
Well, the Kubernetes cluster is:
A Kubernetes cluster is a set of node machines for running containerized applications.
So answering your question:
It's not clear what is meant by "within the cluster" - does that mean within a container (pod) in the cluster? or even from the nodes themselves as in the example above?
Yes, it means nodes + pods that are running on these nodes, so your behaviour is normal and expected.
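To check the same thing from inside a Pod rather than from a node, you can run a throwaway container (a sketch; the busybox image is illustrative and 10.98.148.55 is the ClusterIP from the question):

$ kubectl run test --rm -i --restart=Never --image=busybox -- wget -qO- 10.98.148.55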
Another question:
The thing is that I'm not sure if this capability is part of the definition of the ClusterIP service - i.e. is it guaranteed to work this way no matter what network plugin I use, or is this plugin-dependant.
Basically the CNI plugin is responsible for network communication for the pods:
A CNI plugin is responsible for inserting a network interface into the container network namespace (e.g. one end of a veth pair) and making any necessary changes on the host (e.g. attaching the other end of the veth into a bridge). It should then assign the IP to the interface and setup the routes consistent with the IP Address Management section by invoking appropriate IPAM plugin.
I did a quick test with a fresh kubeadm cluster set up without a CNI plugin. I created a sample Nginx deployment and then a service for it. What did I observe?
the service is created properly and has an IP address assigned, however...
user#example-ubuntu-kubeadm-template-clear-1:~$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   10m
my-service   ClusterIP   10.109.245.27   <none>        80/TCP    6m52s
the pods from the deployment did not start because the node has a taint:
1 node(s) had taint {node.kubernetes.io/not-ready: }
In the kubectl describe node {node-name} I can find:
container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
So yeah, services are independent from the CNI plugin, but without a CNI plugin you cannot start any pods, so services are useless.
It depends on the cluster setup. If you are using a GKE cluster and it is set up as VPC-native, then you will be able to reach the ClusterIP service from a host in the same VPC.

expose Istio-gateway on port 80

I'm running a bare-metal Kubernetes cluster with 1 master node and 3 worker nodes. I have a bunch of services deployed inside, with Istio as an ingress gateway.
Everything works fine since I can access my services from outside using the ingress gateway NodePort.
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                      AGE
istio-ingressgateway   LoadBalancer   10.106.9.2       <pending>     15021:32402/TCP,80:31106/TCP,443:31791/TCP   2d23h
istiod                 ClusterIP      10.107.220.130   <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP        2d23h
In our case that is port 31106.
The issue is, I don't want my customers to access my service on port 31106; that's not user-friendly. So is there a way to expose port 80 to the outside?
In other words, instead of typing http://example.com:31106/, I want them to be able to type http://example.com/.
Any solution could help.
Based on official documentation:
If the EXTERNAL-IP value is set, your environment has an external load balancer that you can use for the ingress gateway. If the EXTERNAL-IP value is <none> (or perpetually <pending>), your environment does not provide an external load balancer for the ingress gateway. In this case, you can access the gateway using the service’s node port.
This is in line with what David Maze wrote in the comment:
A LoadBalancer-type service would create that load balancer, but only if Kubernetes knows how; maybe look up metallb for an implementation of that. The NodePort port number will be stable unless the service gets deleted and recreated, which in this case would mean wholesale uninstalling and reinstalling Istio.
In your situation you need to access the gateway using the NodePort. Then you can configure Istio. Everything is described step by step in this doc. You need to choose the instructions corresponding to NodePort and then set the ingress IP depending on the cluster provider. You can also find sample yaml files in the documentation.
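Roughly, following the NodePort instructions in that doc, you determine a node IP and the gateway's HTTP NodePort and use those (a sketch; the port name http2 matches the default istio-ingressgateway service but may differ per Istio version):

export INGRESS_HOST=$(kubectl get po -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
curl http://$INGRESS_HOST:$INGRESS_PORT/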

How to connect to a GKE service from GCE using internal IPs

I have an Nginx service deployed in GKE with a NodePort exposed, and I want to connect to it from my Compute Engine instances through an internal IP address only. When I try to connect to the Nginx with the cluster IP I only receive a timeout.
I think that the ClusterIP is only reachable inside a cluster, but since I activated the NodePort it might work.
I don't know the difference between NodePort and ClusterIP very well.
Background
You can expose your application outside the cluster using NodePort or LoadBalancer. ClusterIP allows connections only inside the cluster and is the default Service type.
ClusterIP:
Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType
NodePort:
Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
LoadBalancer
Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
In short, when you are using NodePort you need to use NodePublicIP:NodePort. When you are using LoadBalancer it will create a Network LB with an ExternalIP.
In your GKE cluster you have something called a VPC - Virtual Private Cloud - which provides networking for your cloud-based resources and services that is global, scalable, and flexible.
Solution
Using VPC-Native Cluster
With VPC-native clusters you'll be able to reach Pod IPs directly. You will need to create a subnet in order to do it. The full guide can be found here.
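For example, a VPC-native cluster can be created with alias IP ranges enabled (a sketch; the cluster name is illustrative and the subnet details are covered in the linked guide):

$ gcloud container clusters create example-cluster --enable-ip-alias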
Using VPC Peering
If you would like to connect from 2 different projects in GKE, you will need to use VPC Peering.
Access from outside the cluster using NodePort
If you would like to reach your nginx service from outside, you can use NodeIP:NodePort.
Use a node's ExternalIP (keep in mind that this node must have an application Pod on it: if you have 3 nodes and only 1 application replica, you must use the ExternalIP of the node where this Pod was deployed). You also need to allow NodePort access on the firewall (see the example after the output below).
$ kubectl get nodes -o wide
NAME                                       STATUS   ROLES    AGE     VERSION             INTERNAL-IP   EXTERNAL-IP     OS-IMAGE                             KERNEL-VERSION   CONTAINER-RUNTIME
gke-cluster-1-default-pool-faec7b51-n5hm   Ready    <none>   3h23m   v1.17.14-gke.1600   10.128.0.26   23.236.50.249   Container-Optimized OS from Google   4.19.150+        docker://19.3.6
$ kubectl get svc
NAME    TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
nginx   NodePort   10.8.9.10    <none>        80:30785/TCP   39m
$ curl 23.236.50.249:30785
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
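If the NodePort is blocked, a firewall rule roughly like this opens it on the cluster's nodes (a sketch; the rule name is illustrative and 30785 is the NodePort from the output above):

$ gcloud compute firewall-rules create allow-nginx-nodeport --allow tcp:30785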
The Cluster IP address is only accessible within the cluster; that's why it is giving a timeout message. NodePort is used to expose a port on the public IP of every node of the cluster, so it may work.

k3d no external IP for a service of LoadBalancer type

I am deploying the hello-world Docker container to a k3d cluster.
To get the external IP, a service of type LoadBalancer is deployed.
After that I was hoping to call the application via the load balancer. But I don't get the external IP.
k3d create --name="mydemocluster" --workers="2" --publish="80:80"
export KUBECONFIG="$(k3d get-kubeconfig --name='mydemocluster')"
kubectl run kubia --image=hello-world --port=8080 --generator=run/v1
kubectl expose rc kubia --type=LoadBalancer --name kubia-http
export KUBECONFIG="$(k3d get-kubeconfig --name='mydemocluster')"
Then kubectl get services:
A LoadBalancer type service will get an external IP only if you use a managed Kubernetes service provided by cloud providers such as AWS EKS, Azure AKS, Google GCP etc. A tool such as k3d is for local development, and if you create a LoadBalancer type service the external IP will stay pending. The alternative is to use a NodePort type service or an ingress. Here is the doc on this.
Also, you can use kubectl port-forward or kubectl proxy to access the pod.
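For example, with the service created by the expose command above (a sketch; 8080 is the container port from the kubectl run command, assuming the container actually serves HTTP on that port):

$ kubectl port-forward svc/kubia-http 8080:8080
$ curl http://localhost:8080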
I was following this example
with k3d and there it seems to work fine:
(base) erik#buzzard:~/kubernetes/tutorial>
kubectl get services
NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes              ClusterIP      10.43.0.1       <none>        443/TCP          3d6h
mongodb-service         ClusterIP      10.43.215.113   <none>        27017/TCP        27m
mongo-express-service   LoadBalancer   10.43.77.100    172.20.0.2    8081:30000/TCP   27m
As I understand it, k3d runs k3s, which is more of a full Kubernetes setup than minikube, for instance. I can access the service at http://172.20.0.2:8081 without problems.
You'll need a cloud controller manager to act as a service controller to do that. As far as on-prem goes, your best option is likely MetalLB.
That being said, I don't know how that will behave with the underlying docker network in K3d. It's on my list of things to try out. If I find it works well, I'll come back and update this post.
I solved this by changing my manifest from a LoadBalancer type to an Ingress type. k3d doesn't seem to expose external IPs properly for a LoadBalancer type.
Oddly, I did find I was able to get the LoadBalancer type to work if I deployed really quickly. It seemed it had to be after the master node was up and before any agents were up.

External IP assignment with Minikube ingress add-on enabled

For development purposes I am trying to use Minikube. I want to test how my application will catch the event of a service being exposed and an External-IP being assigned.
When I exposed a service in the Google Container Engine quick start tutorial I could see the event of External IP assignment with:
kubectl get services --watch
I want to achieve the same with Minikube (if possible).
Here is how I try to set things up locally on my OSX development machine:
minikube start --vm-driver=xhyve
minikube addons enable ingress
kubectl run echoserver --image=gcr.io/google_containers/echoserver:1.4 --port=8080
kubectl expose deployment echoserver --type="LoadBalancer"
kubectl get services --watch
I see the following output:
NAME         TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
echoserver   LoadBalancer   10.0.0.138   <pending>     8080:31384/TCP   11s
kubernetes   ClusterIP      10.0.0.1     <none>        443/TCP          4m
The EXTERNAL-IP field never gets updated and shows the pending phase. Is it possible to achieve external IP assignment with Minikube?
On GKE or AWS installs, the external IP comes from the cloud support that reports back to the kube API the address assigned to the created LB.
To have the same on minikube you'd have to run some kind of an LB controller, e.g. an haproxy one, but honestly, for minikube it makes little sense, as you have a single IP that you know in advance via minikube ip, so you can use NodePort with that knowledge. An LB solution would require setting some IP range that can be mapped to particular NodePorts, as this is effectively what the LB will do - take traffic from extIP:extPort and proxy it to minikubeIP:NodePort.
Unless your use case prevents you from it, you should consider Ingress as the way of ingesting traffic into your minikube.
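For example, with the NodePort from the output above (31384), something like this should reach the echoserver directly (a sketch; the address printed by minikube ip differs per machine):

$ curl http://$(minikube ip):31384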
If you want to emulate the external IP assignment event (like the one you can observe using GKE or AWS), this can be achieved by applying the following patch to your sandbox Kubernetes:
kubectl run minikube-lb-patch --replicas=1 --image=elsonrodriguez/minikube-lb-patch:0.1 --namespace=kube-system
https://github.com/elsonrodriguez/minikube-lb-patch#assigning-external-ips