Automated way to open NodePort range in Network Security Group - kubernetes

I have some pods that have an associated NodePort service that I would like to expose to the world. However, I am not in control of setting the value of the NodePort, so I need to open up the full range of 30000-32767 in Azure's Network Security Group (NSG).
Manually, I have successfully created a rule in the NSG with a destination of the target Node's internal IP of 10.240.0.4. This then allows connection using the Node's external IP on the given NodePort, so I know it is feasible.
However, I am a little stuck on how to automate the creation of the NSG rule. Typically, you could define a service of type LoadBalancer, which would cause Azure to create the rule in the NSG and expose it. However, that also means a different IP is given to the LoadBalancer, and in this case I can't create a LoadBalancer for every NodePort, as I am not in control of this deployment.
I've looked at Terraform, and it seems possible to configure an NSG rule, but I can't find a way to get the target Node's internal IP (there are multiple nodes, and only one has an external IP). This would also not be an ideal solution if we have automatic scaling on the nodes, as new rules would need to be defined for each new node.
Am I missing something obvious, where I can instruct the NSG to open any created NodePort for a Node that is public/is marked as having enable_node_public_ip? The Microsoft documentation doesn't add anything further to the public IP information.

Related

Can't access NodePort service in OVH Managed Kubernetes cluster

In my OVH Managed Kubernetes cluster I'm trying to expose a NodePort service, but it looks like the port is not reachable via <node-ip>:<node-port>.
I followed this tutorial: Creating a service for an application running in two pods. I can successfully access the service on localhost:<target-port> as well as via kubectl port-forward, but it doesn't work on <node-ip>:<node-port> (request timeout), though it works from inside the cluster.
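For reference, the service from the tutorial is of this general shape (a minimal sketch; the names and ports are illustrative, the tutorial uses its own):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service            # illustrative
    spec:
      type: NodePort
      selector:
        app: my-app               # must match the pods' labels
      ports:
        - port: 80                # service port inside the cluster
          targetPort: 8080        # container port (<target-port> above)
          nodePort: 30007         # illustrative; auto-allocated from 30000-32767 if omitted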
The tutorial says that I may have to "create a firewall rule that allows TCP traffic on your node port" but I can't figure out how to do that.
The security group seems to allow any traffic.
The solution is to NOT enable "Private network attached" ("réseau privé attaché") when you create the managed Kubernetes cluster.
If you have already paid for your nodes, configured DNS, or anything similar, you can select your current Kubernetes cluster, choose "Reset your cluster" ("réinitialiser votre cluster"), then "Keep and reinstall nodes" ("conserver et réinstaller les noeuds"), and for the "Private network attached" ("Réseau privé attaché") option, choose "None (public IPs)" ("Aucun (IPs publiques)").
I faced the same use case and problem, and after some research and experimentation, got the hint from the small comment on this dialog box:
By default, your worker nodes have a public IPv4. If you choose a private network, the public IPs of these nodes will be used exclusively for administration/linking to the Kubernetes control plane, and your nodes will be assigned an IP on the vLAN of the private network you have chosen
Now I run my Traefik ingress as a DaemonSet using hostNetwork, and every node is reachable directly, even on low ports (as you saw yourself, the default security group is open).
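A trimmed-down sketch of such a DaemonSet (image tag and names are illustrative, not my exact manifest):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: traefik                  # illustrative
    spec:
      selector:
        matchLabels:
          app: traefik
      template:
        metadata:
          labels:
            app: traefik
        spec:
          hostNetwork: true          # bind straight to each node's network namespace
          containers:
            - name: traefik
              image: traefik:v2.10   # illustrative tag
              ports:
                - containerPort: 80  # reachable on every node's public IP
                - containerPort: 443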
Well, I can't help much further, I guess, but I would check the following:
Are you using the node's public IP address?
Did you configure your service as a LoadBalancer properly? (See the sketch after this list.)
https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
Do you have a load balancer, and is it set up properly?
Did you install any Ingress controller (ingress-nginx?)? You may need to add a DaemonSet for this ingress controller, to duplicate the ingress-controller pod on each node in your cluster.
Otherwise, I would suggest an Ingress (if this works, you can rule out any firewall-related issues).
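Regarding the LoadBalancer point above, a minimal Service of that type looks roughly like this (names and ports are illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-lb-service           # illustrative
    spec:
      type: LoadBalancer            # the cloud provider provisions an external LB
      selector:
        app: my-app                 # must match your pods' labels
      ports:
        - port: 80                  # external port on the load balancer
          targetPort: 8080          # container port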
This page explains it very well:
What's the difference between ClusterIP, NodePort and LoadBalancer service types in Kubernetes?
In AWS, you have things called security groups... you may have the same kind of thing in your k8s provider (or even on your local machine). Please add those ports to the security groups or local firewalls. In AWS, you may need to bind those security groups to your EC2 instance (Ingress node) as well.

Can somebody explain why I have to use an external (MetalLB, HAProxy, etc.) load balancer with a bare-metal Kubernetes cluster?

For instance, I have a bare-metal cluster with 3 nodes, each with some instance exposing port 105. To expose it on an external IP address, I can define a service of type NodePort with "externalIPs", and it seems to work well. The documentation says to use a load balancer, but I don't quite get why I have to use it, and I'm worried about making a mistake.
Can somebody explain why I have to use an external (MetalLB, HAProxy, etc.) load balancer with a bare-metal Kubernetes cluster?
You don't have to use it; it's up to you to choose whether you would like to use NodePort or LoadBalancer.
Let's start with the difference between NodePort and LoadBalancer.
NodePort is the most primitive way to get external traffic directly to your service. As the name implies, it opens a specific port on all the Nodes (the VMs), and any traffic that is sent to this port is forwarded to the service.
LoadBalancer service is the standard way to expose a service to the internet. It gives you a single IP address that will forward all traffic to your service.
You can find more about that in the Kubernetes documentation.
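To make the difference concrete in manifest terms, here is a minimal sketch (names and ports are illustrative); switching the type field is the only change needed between the two:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service           # illustrative
    spec:
      type: LoadBalancer         # set to NodePort to expose only a per-node port
      selector:
        app: my-app              # must match your pods' labels
      ports:
        - port: 80               # service port (and the cloud LB's port, if any)
          targetPort: 8080       # container port
          # with type: NodePort, a port in the 30000-32767 range is also allocated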
As for the question you've asked in the comment, "But NodePort with the externalIPs option is doing exactly the same. The only tiny difference I see is that the IP should be owned by one of the cluster machines. So where is the benefit of using a LoadBalancer?", let me answer that more precisely.
Here are the advantages and disadvantages of ExternalIP.
The advantage of using ExternalIP:
You have full control of the IP that you use. You can use an IP that belongs to your ASN instead of a cloud provider's ASN.
The disadvantages of using ExternalIP:
The simple setup that we will go through right now is NOT highly available. That means if the node dies, the service is no longer reachable, and you'll need to manually remediate the issue.
There is some manual work that needs to be done to manage the IPs. The IPs are not dynamically provisioned for you, so manual human intervention is required.
Summarizing the pros and cons of both, we can conclude that ExternalIP is not made for a production environment: it's not highly available, and if a node dies the service will no longer be reachable and you will have to fix that manually.
With a LoadBalancer, if a node dies the service will be recreated automatically on another node, so it's dynamically provisioned and there is no need to configure it manually as with ExternalIP.
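For completeness, a minimal sketch of the ExternalIP variant discussed above (the IP is illustrative and must route to one of your cluster's nodes):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service            # illustrative
    spec:
      selector:
        app: my-app               # must match your pods' labels
      ports:
        - port: 105               # the port from the question
          targetPort: 105
      externalIPs:
        - 203.0.113.10            # illustrative; traffic arriving at this IP on port 105 is routed to the service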

The same IP for a pod and for a service's external IP

I took a pod's IP and assigned it as the externalIP of a service. I also tried assigning an unassigned IP to it. It works either way, and I am not able to find any side effects. Do you see any possible issue with such a solution?
The external IP field of a service is only used for tracking purposes; it is descriptive rather than prescriptive. You could put whatever you want there; the only thing I know of which uses that field is external-dns. Beyond that, it's only for humans, so the system can report back what the IP or hostname is with type LoadBalancer.
As mentioned in the Kubernetes Service documentation:
externalIPs are not managed by Kubernetes and are the responsibility of the cluster administrator.
ExternalIP is good when you want to have control of your service IPs; on the other hand, high availability will be compromised, since if one node of the cluster dies you will lose the route to the service.
In this blog there's a good explanation of ExternalIP.

How to assign a single static source IP address for all pods of a service or deployment in kubernetes?

Consider a microservice X which is containerized and deployed in a kubernetes cluster. X communicates with a Payment Gateway PG. However, the payment gateway requires a static IP for services contacting it as it maintains a whitelist of IP addresses which are authorized to access the payment gateway. One way for X to contact PG is through a third party proxy server like QuotaGuard which will provide a static IP address to service X which can be whitelisted by the Payment Gateway.
However, is there a built-in mechanism in Kubernetes which can enable a service deployed in a cluster to obtain a static IP address?
There's no mechanism in Kubernetes for this yet.
Other possible solutions:
If the nodes of the cluster are in a private network behind a NAT, then just add your network's default gateway to the PG's whitelist.
If the whitelist can accept a CIDR rather than only single IPs (like 86.34.0.0/24, for example), then add your cluster's network CIDR to the whitelist.
If every node of the cluster has a public IP and you can't add a CIDR to the whitelist, then it gets more complicated:
A naive way would be to add every node's IP to the whitelist, but that doesn't scale beyond tiny clusters of just a few nodes.
If you have administrative access to your network, then even though the nodes have public IPs, you can set up a NAT for the network anyway that targets only packets with the PG's IP as a destination.
If you don't have administrative access to the network, then another way is to allocate a machine with a static IP somewhere and make it act as a proxy using iptables NAT, similarly to the above. This introduces a single point of failure, though. To make it highly available, you could deploy it on a Kubernetes cluster with a few (2-3) replicas (this can be the same cluster where X is running: see below). Instead of using their node's IP to communicate with the PG, the replicas would share a VIP, managed by keepalived, that would be added to the PG's whitelist (you can have a look at easy-keepalived and either try to use it directly or learn from it how it does things). This requires high privileges on the cluster: you need to be able to grant the pods of your proxy the NET_ADMIN and NET_RAW capabilities so that they can add iptables rules and set up the VIP; a sketch of that part follows below.
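A sketch of how those capabilities could be granted to the proxy pods (everything here is illustrative: the image is hypothetical, and the actual keepalived/iptables logic would live inside it):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: egress-proxy                 # illustrative
    spec:
      replicas: 2                        # 2-3 replicas sharing the VIP via keepalived
      selector:
        matchLabels:
          app: egress-proxy
      template:
        metadata:
          labels:
            app: egress-proxy
        spec:
          hostNetwork: true              # assumption: the VIP and NAT rules live on the node's interfaces
          containers:
            - name: proxy
              image: registry.example.com/egress-proxy:latest   # hypothetical image bundling keepalived + iptables
              securityContext:
                capabilities:
                  add: ["NET_ADMIN", "NET_RAW"]   # needed to add iptables rules and manage the VIP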
Update:
While waiting for builds and deployments during the last few days, I've polished my old VIP-iptables scripts that I used to use as a replacement for external load balancers on bare-metal clusters, so now they can also be used to provide an egress VIP as described in the last point of my original answer. You can give them a try: https://github.com/morgwai/kevip
There are two answers to this question. For the pod IP itself, it depends on your CNI plugin: some allow it with special pod annotations. However, most CNI plugins also involve a NAT when talking to the internet, so the pod IP being static on the internal network is kind of moot; what you care about is the public IP the connection ends up coming from. So the second answer is "it depends on how your node networking and NAT is set up". This is usually up to the tool you used to deploy Kubernetes (or OpenShift, in your case, I guess). With Kops it's pretty easy to tweak the VPC routing table.
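For example, assuming Calico as the CNI plugin, a fixed pod IP can be requested with an annotation like this (the address is illustrative and must belong to a Calico IP pool):

    apiVersion: v1
    kind: Pod
    metadata:
      name: static-ip-pod                                   # illustrative
      annotations:
        cni.projectcalico.org/ipAddrs: '["192.168.0.100"]'  # Calico-specific; other CNIs use different annotations, if any
    spec:
      containers:
        - name: app
          image: nginx                                      # illustrative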

Losing the external IPs after K8s update on GCP

When doing K8s updates on GCP, we lose the link between the nodes and their external IPs. That causes issues afterwards for K8s apps communicating with other clouds secured by firewalls.
I then have to assign them manually again. Why is this? Can I prevent it somehow?
First of all, ensure you have set your IP to static in the cloud console -> Networking -> External IP addresses.
Once it's set to static, you can assign your Service to the static IP using the loadBalancerIP property, as sketched below. Note that your Service should be of type LoadBalancer. See https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer for more information.
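A minimal sketch of that (names are illustrative; the IP must be the static address you reserved):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service                # illustrative
    spec:
      type: LoadBalancer
      loadBalancerIP: 203.0.113.20    # illustrative; your reserved static IP
      selector:
        app: my-app                   # must match your pods' labels
      ports:
        - port: 443
          targetPort: 8443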
If you don't require a LoadBalancer, you could also try https://kubernetes.io/docs/concepts/services-networking/service/#external-ips