Losing the external IPs after K8s update on GCP

When we run K8s updates on GCP, the nodes lose their association with their external IPs. This then causes issues for K8s apps that communicate with other clouds secured by firewalls.
I have to reassign the IPs manually afterwards. Why does this happen? Can I prevent it somehow?

First of all, ensure you have set your IP to static in the Cloud Console -> Networking -> External IP addresses.
Once it's set to static, you can assign your Service to that static IP using the loadBalancerIP property. Note that your Service should be of type LoadBalancer. See https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer for more information.
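For example, a minimal sketch of such a Service, assuming you have already reserved the static address (the name and IP below are placeholders):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app                      # hypothetical name
    spec:
      type: LoadBalancer
      loadBalancerIP: 35.200.10.10      # the static IP you reserved in the Cloud Console (placeholder)
      selector:
        app: my-app
      ports:
        - port: 80
          targetPort: 8080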
If you don't require a LoadBalancer, you could also try out https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
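A rough sketch of that externalIPs variant (the address is a placeholder and must be one your nodes actually receive traffic on):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app                      # hypothetical name
    spec:
      selector:
        app: my-app
      ports:
        - port: 80
          targetPort: 8080
      externalIPs:
        - 203.0.113.10                  # placeholder; traffic hitting this IP on port 80 is routed to the Service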

Related

How to keep IP address of a pod static after pod dies

I am new to Kubernetes, and I understand that pods have dynamic IPs and require some other "Service" resource attached to them in order to use a fixed IP address. What kind of Service do I need, what is the configuration process, and how does AWS ECR fit into all this?
So if a container in one of my pods has to communicate with google.com, can I assume that the source of the connection is the IP address of the "Service"?
Well, on Azure for example, this feature ([Feature Request] Pod Static IP) is still an open request:
See https://github.com/Azure/AKS/issues/2189
Also, as far as I know, you can currently assign an existing IP address to a load balancer service or an ingress controller
See https://learn.microsoft.com/en-us/azure/aks/static-ip
By default, the public IP address assigned to a load balancer resource created by an AKS cluster is only valid for the lifespan of that resource. If you delete the Kubernetes service, the associated load balancer and IP address are also deleted. If you want to assign a specific IP address or retain an IP address for redeployed Kubernetes services, you can create and use a static public IP address.
As you said, you need to define a Service which selects all the required pods; you then send requests to this Service instead of to the pods directly.
I would suggest you go through https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types.
The type of service you need basically depends on the use-case.
I will give a small overview so you get an idea.
Usually, when pods only receive internal requests, ClusterIP is used (see the sketch after this overview).
NodePort allows external requests but is basically used for testing rather than production cases.
If you also have requests coming from outside the cluster, you would usually use a LoadBalancer.
Then there is the further option of an Ingress.
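As a minimal sketch (all names are placeholders), a ClusterIP Service selecting your pods looks like this; other pods then reach them via the Service's stable name/IP instead of individual pod IPs:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-backend              # other pods call http://my-backend:80
    spec:
      type: ClusterIP               # the default type; cluster-internal access only
      selector:
        app: my-backend             # must match the labels on your pods
      ports:
        - port: 80
          targetPort: 8080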
As for AWS ECR, it's basically a container registry where you store your Docker images and pull them from.

Is it possible to configure a VM instance on GCP to only receive requests from the Load Balancer?

I have an nginx deployment on k8s that is exposed via a NodePort service. Is it possible, using the GCP firewall, to allow only an application LB to talk to these NodePorts?
I would rather not leave these two NodePorts open to everyone.
You can certainly control access to your VM instance via the firewall; that is what the firewall service exists for.
If you created the VM in the default VPC with the default firewall settings, the firewall will deny all traffic from outside.
You just need to write a rule that allows traffic from the application LB.
According to Google's documentation, you need to allow the 130.211.0.0/22 and 35.191.0.0/16 IP ranges.
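A sketch of such a rule with gcloud (the rule name, network, target tag and NodePort range are assumptions; adjust them to your setup):

    gcloud compute firewall-rules create allow-lb-to-nodeports \
      --network=default \
      --direction=INGRESS \
      --action=ALLOW \
      --rules=tcp:30000-32767 \
      --source-ranges=130.211.0.0/22,35.191.0.0/16 \
      --target-tags=my-k8s-nodes

With a matching tag on the instances, only the load balancer / health-check ranges can reach the NodePorts, while everything else stays blocked by the default deny.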

How to assign a single static source IP address for all pods of a service or deployment in kubernetes?

Consider a microservice X which is containerized and deployed in a kubernetes cluster. X communicates with a Payment Gateway PG. However, the payment gateway requires a static IP for services contacting it as it maintains a whitelist of IP addresses which are authorized to access the payment gateway. One way for X to contact PG is through a third party proxy server like QuotaGuard which will provide a static IP address to service X which can be whitelisted by the Payment Gateway.
However, is there an inbuilt mechanism in kubernetes which can enable a service deployed in a kube-cluster to obtain a static IP address?
There's no mechanism in Kubernetes for this yet.
Other possible solutions:
If the nodes of the cluster are in a private network behind a NAT, then just add your network's default gateway to the PG's whitelist.
If the whitelist can accept a CIDR rather than only single IPs (like 86.34.0.0/24, for example), then add your cluster's network CIDR to the whitelist.
If every node of the cluster has a public IP and you can't add a CIDR to the whitelist, then it gets more complicated:
A naive way would be to add every node's IP to the whitelist, but that doesn't scale beyond tiny clusters of just a few nodes.
If you have administrative access to your network, then even though the nodes have public IPs, you can set up a NAT for the network anyway that targets only packets with the PG's IP as destination (a minimal sketch of such a rule follows this list).
If you don't have administrative access to the network, then another way is to allocate a machine with a static IP somewhere and make it act as a proxy using iptables NAT, similarly to the above. This introduces a single point of failure though. In order to make it highly available, you could deploy it on a Kubernetes cluster with a few (2-3) replicas (this can be the same cluster where X is running: see below). Instead of using their node's IP to communicate with the PG, the replicas would share a VIP, managed by keepalived, that would be added to the PG's whitelist (you can have a look at easy-keepalived and either try to use it directly or learn from it how it does things). This requires high privileges on the cluster: you need to be able to grant the NET_ADMIN and NET_RAW capabilities to your proxy's pods so that they can add iptables rules and set up the VIP.
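As a rough sketch of that NAT idea (all addresses and the interface are placeholders), the rule on the gateway/proxy machine could look like:

    # SNAT only the packets destined for the payment gateway, so they leave
    # with the whitelisted static source IP (all values are placeholders)
    iptables -t nat -A POSTROUTING -d 198.51.100.20 -o eth0 -j SNAT --to-source 203.0.113.5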
update:
While waiting for builds and deployments during the last few days, I've polished my old VIP-iptables scripts that I used to use as a replacement for external load balancers on bare-metal clusters, so now they can also be used to provide an egress VIP as described in the last point of my original answer. You can give them a try: https://github.com/morgwai/kevip
There are two answers to this question: for the pod IP itself, it depends on your CNI plugin. Some allow it with special pod annotations. However most CNI plugins also involve a NAT when talking to the internet so the pod IP being static on the internal network is kind of moot, what you care about is the public IP the connection ends up coming from. So the second answer is "it depends on how your node networking and NAT is set up". This is usually up to the tool you used to deploy Kubernetes (or OpenShift in your case I guess). With Kops it's pretty easy to tweak the VPC routing table.
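For illustration only: with Calico's IPAM, for example, a specific pod IP can reportedly be requested with an annotation like the one below (a Calico-specific feature; the name and address are placeholders, and other CNI plugins use different mechanisms or none at all):

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-pod                                     # hypothetical
      annotations:
        cni.projectcalico.org/ipAddrs: '["10.8.0.52"]' # requested pod IP (Calico IPAM only)
    spec:
      containers:
        - name: app
          image: nginx

Keep in mind the caveat above: if the node NATs egress traffic, this internal address is not what the payment gateway sees.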

Changing Kubernetes cluster IP to internal IP

I have created a Kubernetes cluster in Google Cloud. I did this a few months ago and configured the cluster to have an external IP address limited by authorized networks.
I want to change the cluster IP to an internal IP. Is this possible without re-creating the cluster?
As documented here, you currently "cannot convert an existing, non-private cluster to a private cluster."
Having said that, you'll need to create a new private cluster from scratch, which will have both an external IP and an internal IP. However, you'll be able to disable access to the external IP or restrict access to it as per your needs. Have a look here for the different settings available.
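For reference, a hedged sketch of creating such a private cluster with gcloud (the flags and CIDRs here are illustrative; check the linked docs for the current options):

    gcloud container clusters create my-private-cluster \
      --enable-ip-alias \
      --enable-private-nodes \
      --master-ipv4-cidr 172.16.0.0/28 \
      --enable-master-authorized-networks \
      --master-authorized-networks 203.0.113.0/24

As far as I know, leaving out --enable-private-endpoint keeps the external endpoint reachable but restricted to the authorized networks, while adding it disables external access to the control plane entirely.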

Kubernetes on GKE / why is the use of a load balancer enforced?

I made my way into Kubernetes through GKE, and am currently trying it out via kubeadm on bare metal.
In the latter environment, there is no need for any specific load balancer; using nginx-ingress and Ingress resources lets one serve services to the web.
Conversely, on GKE, whether you use the same nginx-ingress or the GKE-provided L7, you always end up with a billed load balancer.
What's the reason for that, since it doesn't seem to be strictly needed?
(Reposting my comment above)
In general, when one is receiving traffic from the outside world, that traffic is being sent to one or more non-ACLd public IP addresses.
If you run k8s on bare metal, those machines can have public IPs, and you can just run ingress on one or more of them.
A managed k8s environment, however, for security reasons, will not permit nodes to have public IPs.
Instead, managed load balancers are allowed to have public IPs. Those are configured to know the private node IPs hosting ingress for your cluster and will direct traffic accordingly.
Kubernetes Services have a few types, each building on the previous one: ClusterIP, NodePort and LoadBalancer. Only the last one will provision a load balancer in a cloud environment, so you can avoid it on GKE without fuss. The question is, what then? Because in the best case you end up with an Ingress (I assume we expose an ingress controller, as in your question) that is available on volatile IPs (nodes can be rolled at any time and new ones will get new IPs) and on the high ports given by a NodePort service.
That means not only do you have no fixed IP to use, but you would also need to open something like http://<node-ip>:31978, which is obviously not practical. Hence, in the cloud, you have the simple solution of putting a cloud load balancer in front of it with the LoadBalancer service type. This LB will ingest the traffic on ports 80/443 and forward it to the correct backing service/pods.
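A minimal sketch of that last step, assuming an ingress-nginx controller (the namespace and labels are assumptions taken from the upstream defaults and may differ in your install):

    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx-controller
      namespace: ingress-nginx
    spec:
      type: LoadBalancer                           # GKE provisions the (billed) load balancer here
      selector:
        app.kubernetes.io/name: ingress-nginx      # assumed controller pod label
      ports:
        - name: http
          port: 80
          targetPort: 80
        - name: https
          port: 443
          targetPort: 443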