Service external IP pending on Kubernetes hosted on Jelastic

I have installed my Kubernetes cluster on Jelastic. Now I have defined a Service of type LoadBalancer and would like it to be provided with an external IP, but the external IP is stuck at pending. What should I do to make it non-pending? Do I have to give the worker nodes an external IPv4 address?
In my current setup, my worker nodes have no IPv4 address, because I put an nginx load balancer in front of the cluster:
The IPv4 address is set on the nginx node. Is that a problem? If I want to access my LoadBalancer service inside my Kubernetes cluster, what should I do?

For the LoadBalancer service type to work, the cloud provider must implement the relevant APIs.
With regard to Jelastic, their docs state that they don't support it (https://docs.jelastic.com/kubernetes-exposing-services/):
Jelastic PaaS does not support the LoadBalancer service type currently.
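For reference, a minimal Service of type LoadBalancer looks like the sketch below (the name, label, and ports are placeholders). On a platform with no cloud controller implementing the load-balancer API, such as Jelastic per the quote above, its external IP stays at pending indefinitely:

```yaml
# Hypothetical example: a LoadBalancer Service that will stay "pending"
# on a platform without load-balancer API support.
apiVersion: v1
kind: Service
metadata:
  name: my-app        # placeholder name
spec:
  type: LoadBalancer  # requires cloud-provider integration to get an external IP
  selector:
    app: my-app       # placeholder pod label
  ports:
    - port: 80
      targetPort: 8080
```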

In Jelastic, public IP addresses have to be attached to the worker nodes.
Every worker node has an ingress controller instance running (based on nginx/haproxy/traefik) with HTTP/HTTPS listeners that can forward traffic to the required service.
You just have to bind your domain as a CNAME to the Environment FQDN, and every worker node can then accept requests in round-robin DNS mode.
Does this scenario work for you, or do you have a specific requirement to use an external load balancer?
By default, when public IPs are not attached to the worker instances, traffic goes through the Shared Load Balancer.
P.S. If you install the Certificate Manager add-on to your K8s cluster, you can also issue free Let's Encrypt certificates.
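As a sketch of the ingress-based setup described above (the hostname and backend Service name are assumptions), an Ingress resource bound to your domain could look like this:

```yaml
# Hypothetical example: route HTTP traffic for a custom domain
# (CNAMEd to the Environment FQDN) to an in-cluster Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                 # placeholder name
spec:
  rules:
    - host: app.example.com    # the domain you CNAME to the Environment FQDN
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app   # placeholder backend Service
                port:
                  number: 80
```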

Related

Real IP (Domains and Subdomains) on Bare Metal Cluster with MetalLB and Ingress

Help me figure this out.
I have a bare-metal Kubernetes cluster with three nodes; each node has a public IP.
I have installed MetalLB and an Ingress Controller.
It is not clear to me which IP I should point domains and subdomains at so that they can be resolved by the Ingress Controller.
Do I need to define in advance which node the Ingress Controller will be launched on?
Should I install the Ingress Controller, then look up which worker node it was scheduled on, and point all domains and subdomains there?
What happens if, after a cluster restart, the Ingress Controller is deployed on another node?
All the tutorials I've seen show how it works locally or with a cloud load balancer.
Help me understand how this should work correctly.
Usually, when you install MetalLB, you configure a pool of addresses which can be used to assign new IPs to LoadBalancer services whenever they are created. Such IP addresses need to be available; they cannot be created out of nothing, of course. They could be leased from your hosting provider, for example.
If instead you have a private bare-metal cluster which serves only your LAN, you can just select a private range of IP addresses which are not otherwise in use.
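As a minimal sketch (assuming MetalLB v0.13+ with its CRD-based configuration; the names and the address range are placeholders), the pool and a Layer 2 advertisement could be defined like this:

```yaml
# Hypothetical example: define the pool of assignable IPs and
# announce them in Layer 2 mode.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: public-pool               # placeholder name
  namespace: metallb-system
spec:
  addresses:
    - 203.0.113.10-203.0.113.20   # placeholder range, e.g. leased from your provider
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: public-l2                 # placeholder name
  namespace: metallb-system
spec:
  ipAddressPools:
    - public-pool
```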
Then, once MetalLB is running, what happens is the following:
Someone or something creates a LoadBalancer service (a Helm chart, a user applying a definition, a command, etc.)
The newly created service needs an external IP. MetalLB selects one address from the configured range and assigns it to that service.
MetalLB starts announcing, via standard protocols, that the IP address can now be reached by contacting the cluster. It can work either in Layer 2 mode (one node of the cluster holds the additional IP address) or in BGP mode (true load balancing across all nodes of the cluster).
From that point on, you can reach the new service by contacting this newly assigned IP address (which is NOT the IP of any of the cluster nodes).
Usually, the Ingress Controller ships with a LoadBalancer service (which will grab an external IP address from MetalLB), and you can then reach the Ingress Controller at that IP.
As for your other questions, you don't need to worry about where the Ingress Controller is running or similar details; that is handled automatically.
The only thing you may want to do is to make the domain names which you want to serve point to the external IP address assigned to the Ingress Controller.
Some docs:
MetalLB explanations
Bitnami MetalLB chart
LoadBalancer service docs
As an alternative (especially when you want "static" IP addresses), I should mention HAProxy, installed outside the Kubernetes cluster on a bare server/VM/LXC container/etc. and configured to send all incoming 80/443 traffic to the NodePort of the ingress controller on all Kubernetes workers (if no ingress pod is running on a given worker, the traffic will be forwarded on by Kubernetes), as sketched below.
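A sketch of the Kubernetes side of that setup (the names, labels, and node ports are assumptions): the ingress controller's Service pins fixed NodePorts for the external HAProxy to target on every worker.

```yaml
# Hypothetical example: expose the ingress controller on fixed
# NodePorts so an external HAProxy can forward 80/443 to every worker.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # placeholder name
  namespace: ingress-nginx         # placeholder namespace
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx   # placeholder pod label
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080   # HAProxy forwards incoming port-80 traffic here
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443   # HAProxy forwards incoming port-443 traffic here
```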
Of course, nowadays IP addresses are also "cattle", not "pets" anymore, so MetalLB is the more "Kubernetes-ish" solution, but who knows...
This is the link describing HAProxy solution (I am not affiliated with the author):
https://itnext.io/bare-metal-kubernetes-with-kubeadm-nginx-ingress-controller-and-haproxy-bb0a7ef29d4e

Kubernetes networking: is Service information stored in every node's iptables?

Can someone tell me why the Service hop won't become a single point of failure?
With a Kubernetes Service, I see a Service hop between the client and the Pods:
I guess the routing info for all Services (let's say there are 5000 Services and each Service has 3 Pods) is stored in the iptables rules of each node?
Kubernetes Services connect a set of Pods to an abstracted service name and IP address. Services provide discovery and routing between Pods.
The details depend on the CNI plugin you are using and what type of network it uses. Every network plugin has a different approach to how Pod IP addresses are assigned (IPAM), how iptables rules and cross-node networking are configured, and how routing information is exchanged between the nodes. With the default kube-proxy in iptables mode, those rules are indeed programmed on every node, which is also why the Service hop is not a single point of failure: the virtual Service IP is rewritten locally on whichever node the traffic passes through, so there is no single machine that all Service traffic funnels through.

Dynamic load balancing with Kubernetes

I'm new to Kubernetes.
We have 50 IP addresses, and each IP address has a request limit. The limit is a value kept in a database. We want the load balancer to choose the IP address based on which one has the most remaining limit in the database. Can Kubernetes do that?
First, I advise you to read the official documentation about networking in Kubernetes; you can find it here: kubernetes-networking. In particular, read about Services. The built-in load balancing in Kubernetes never checks application-specific databases.
An abstract way to expose an application running on a set of Pods as a network service. With Kubernetes you don't need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.
An example with the ClusterIP Service type:
Kubernetes assigns a stable, reliable IP address to each newly-created Service (the ClusterIP) from the cluster's pool of available Service IP addresses. Kubernetes also assigns a hostname to the ClusterIP, by adding a DNS entry. The ClusterIP and hostname are unique within the cluster and do not change throughout the lifecycle of the Service. Kubernetes only releases the ClusterIP and hostname if the Service is deleted from the cluster's configuration. You can reach a healthy Pod running your application using either the ClusterIP or the hostname of the Service.
Take a look at how this works in GKE: GKE-IP-allocation.
You can also specify your own cluster IP address as part of a Service creation request by setting the .spec.clusterIP field.
The IP address that you choose must be a valid IPv4 or IPv6 address from within the service-cluster-ip-range CIDR range that is configured for the API server. If you try to create a Service with an invalid clusterIP address value, the API server will return a 422 HTTP status code to indicate that there's a problem.
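As a minimal sketch (the name, selector, and address are placeholders; the chosen address must fall inside your cluster's service-cluster-ip-range):

```yaml
# Hypothetical example: a ClusterIP Service with a manually chosen address.
apiVersion: v1
kind: Service
metadata:
  name: my-app               # placeholder name
spec:
  type: ClusterIP            # the default; shown here for clarity
  clusterIP: 10.96.0.50      # placeholder; must be inside service-cluster-ip-range
  selector:
    app: my-app              # placeholder pod label
  ports:
    - port: 80
      targetPort: 8080
```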
To sum up: the Kubernetes load balancer never does a deep dive into your app. To connect to your app, you need to create a Service. Kubernetes assigns a stable, reliable IP address to each newly created Service, through which you can access your app from within or from outside the cluster. You can also manually assign an IP per Service, but choosing a target based on per-IP request limits stored in a database is application logic that the built-in Service load balancing does not provide.

Expose services inside Kubernetes cluster behind NAT

I have a GKE cluster set up with Cloud NAT, so traffic from any node/container going outward would have the same external IP. (I needed this for whitelisting purposes while working with 3rd-party services).
Now, if I want to deploy a proxy server onto this cluster that does basic traffic forwarding, how do I expose the proxy server "endpoint"? Or more generically, how do I expose a service if I deploy it to this GKE cluster?
A proxy server running behind NAT?
Bad idea, unless it is only for your Kubernetes cluster's workload; but you didn't specify anywhere that it should be reachable only by other Pods running in the same cluster.
As you can read here:
Cloud NAT does not implement unsolicited inbound connections from the internet. DNAT is only performed for packets that arrive as responses to outbound packets.
So it is not meant to be reachable from outside.
If you want to expose an application within your cluster, making it available only to other Pods, use a simple ClusterIP Service; this is the default type, and it will be created as such even if you don't specify a type at all.
Normally, to expose a service endpoint running on a Kubernetes cluster, you have to use one of the Service Types, as Pods have internal IP addresses and are not addressable externally.
The possible service types:
ClusterIP: this also uses an internal IP address, and is therefore not addressable externally.
NodePort: this type opens a port on every node in your Kubernetes cluster, and configures iptables to forward traffic arriving at this port to the Pods providing the actual service.
LoadBalancer: this type opens a port on every node as with NodePort, and also allocates a Google Cloud Load Balancer service, and configures that service to access the port opened on the Kubernetes nodes (actually load balancing the incoming traffic between your operation Kubernetes nodes).
ExternalName: this type configures the Kubernetes internal DNS server to point to the specified IP address (to provide a dynamic DNS entry inside the cluster to connect to external services).
Out of those, NodePort and LoadBalancer are usable for your purposes. With a simple NodePort type Service, you would need publicly accessible node IP addresses, and the allocated port could be used to access your proxy service through any node of your cluster. As any one of your nodes may disappear at any time, this kind of service access is only good if your proxy clients know how to switch to another node IP address. Or you could use the LoadBalancer type Service; in that case you can give your clients the IP address of the configured Google Cloud Load Balancer to connect to, and expect the load balancer to forward the traffic to any one of the running nodes of your cluster, which would then forward the traffic to one of the Pods providing this service.
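For the LoadBalancer route, a minimal sketch could look like the following (the name, label, and port are assumptions; on GKE this provisions a Google Cloud load balancer whose IP your clients would connect to):

```yaml
# Hypothetical example: expose the proxy through a cloud load balancer.
apiVersion: v1
kind: Service
metadata:
  name: proxy                # placeholder name
spec:
  type: LoadBalancer         # GKE provisions a Google Cloud load balancer for this
  selector:
    app: proxy               # placeholder pod label
  ports:
    - port: 3128             # assumed proxy listening port
      targetPort: 3128
```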
For your proxy server to access the Internet as a client, you also need some kind of public IP address. Either you give the Kubernetes nodes public IP addresses (in that case, if you have more than a single node, you'd see multiple source IP addresses, as each node has its own IP address), or, if you use private addresses for your Kubernetes nodes, you need Source NAT functionality, like the one you already use: Cloud NAT.

Wrong IP from GCP Kubernetes load balancer to App Engine service

I'm having some trouble with an nginx pod inside a Kubernetes cluster located on GCP, which should be able to access a service located on App Engine.
I have set firewall rules in App Engine to deny all and allow only some IPs, but the IP which hits my App Engine service isn't the IP of my nginx load balancer; instead it is the IP of one of the cluster's nodes.
An image is better than a thousand words, so here's an image of our architecture:
The problem is: the IP which hits App Engine's firewall is IP A, whereas I thought it would be IP B. IP A changes every time I destroy/create the cluster. If it were IP B, I could easily allow this IP in App Engine's firewall rules, since I've made it static. Does anyone have an idea how to get IP B instead of IP A?
Thanks
The IP address assigned to your nginx "load balancer" is (likely) not an IP owned or managed by your Kubernetes cluster. Services of type LoadBalancer in GKE use Google Cloud Load Balancers. These are an external abstraction which terminates inbound connections in Google's front-end infrastructure and passes traffic to the individual k8s nodes in the cluster for onward delivery to your k8s-hosted service.
Pods in a Kubernetes cluster will, by default, route egress traffic out of the cluster using the configuration of their host node. In GKE, this route corresponds to the gateway of the VPC in which the cluster (and, by extension, Compute Engine instances) exists. The public IP of cluster nodes will change as they are added and removed from the pool.
A workaround uses a dedicated instance with a static external IP to process egress traffic leaving your VPC (i.e. egress from your cluster). Google has a tutorial for this purpose here: https://cloud.google.com/solutions/using-a-nat-gateway-with-kubernetes-engine
There are k8s-native solutions, but these will be unsuitable in a GKE context at present due to the inability to maintain any node with a non-ephemeral public IP.