The same IP for a pod and for a service external IP - Kubernetes

I took the IP of a pod and assigned it to the externalIP of a service. I also tried assigning an IP that was not assigned to anything. It works either way and I cannot find any side effects. Do you see any possible issue with such a solution?

The external IP field of a service is only used for tracking purposes; it is descriptive rather than prescriptive. You could put whatever you want there. The only thing I know of that uses that field is external-dns; beyond that it's only for humans, so the system can report back what the IP or hostname is with type LoadBalancer.

As mentioned in the Kubernetes Service documentation:
externalIPs are not managed by Kubernetes and are the responsibility of the cluster administrator.
ExternalIP is good when you want to have control of your service IPs. On the other hand, high availability will be compromised, since if the node holding the IP dies you will lose the route to the service.
In this blog there's a good explanation about ExternalIP.
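For illustration, a minimal sketch of a Service that sets externalIPs (all names and the IP are placeholders, not taken from the question):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service          # hypothetical name
spec:
  selector:
    app: my-app             # hypothetical pod label
  ports:
    - port: 80
      targetPort: 8080
  externalIPs:
    - 203.0.113.10          # placeholder address; kube-proxy on every node will
                            # forward traffic arriving for this IP to the service
```

Whatever address you pick, it is up to you (or your network) to actually route traffic for that IP to one of the cluster nodes; Kubernetes does not provision it.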

Related

Automated way to open NodePort range in Network Security Group

I have some pods that have an associated NodePort service that I would like to expose to the world. However, I am not in control of setting the value of the NodePort, so I need to open up the full range of 30000-32767 in Azure's Network Security Group (NSG).
Manually, I have successfully created a rule in the NSG, with a destination of the target node's internal IP of 10.240.0.4. This then allows connection using the node's external IP on the given NodePort. So I know it is feasible.
However, I am a little stuck on how to automate the creation of the NSG rule. Typically, you could define a LoadBalancer service type, which would cause Azure to create the rule in the NSG and expose it. However, this also means a different IP is given to the LoadBalancer, and in this case I can't create a LoadBalancer for all the NodePorts, as I am not in control of this deployment.
I've looked at Terraform, and it seems possible to configure an NSG rule; but I can't find out whether it's possible to get the target node's internal IP (there are multiple nodes; only one has an external IP). This would also not be an ideal solution if we have automatic scaling on the nodes, as new rules would need to be defined.
Am I missing something obvious, where I can instruct the NSG to open any created NodePort for a Node that is public/is marked as having enable_node_public_ip? The Microsoft documentation doesn't add anything further to the public IP information.
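For context, the reason the full range is needed: when a NodePort Service does not pin nodePort, Kubernetes picks one from the default 30000-32767 range. A minimal sketch (names are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport     # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-app             # hypothetical pod label
  ports:
    - port: 80
      targetPort: 8080
      # nodePort deliberately omitted: Kubernetes auto-assigns one from
      # 30000-32767, so an NSG rule has to cover the whole range
```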

Is it possible to configure a pod to prioritize using `hostNetwork` but still reference internal service endpoints?

I have a statefulset that I need to run using the host network, purely for performance reasons. But I also want to be able to reference service-name endpoints. Is it possible to do this? ClusterFirstWithHostNet does not work because it doesn't prioritize using the host's network. The dnsConfig configuration might be promising, but I don't know how I would configure it to do what I'm asking about.
This is a community wiki answer. Feel free to expand it.
It might be possible if the app can select a random port to listen on at startup and change it if the port is busy. However, Kubernetes is not involved in selecting the port for the application.
A StatefulSet requires a headless service, so the service doesn't have an IP and works as a set of DNS records in CoreDNS. An A record would probably contain the same IP for replicas on the same node, but an SRV record may actually provide a proper endpoint.
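A rough sketch of that setup, with a headless Service in front of a host-network StatefulSet (all names and the image are placeholders, not from the question):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-headless            # hypothetical name
spec:
  clusterIP: None                  # headless: DNS records only, no virtual IP
  selector:
    app: my-app
  ports:
    - name: http
      port: 8080
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-app-headless     # ties per-pod DNS records to the headless service
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      hostNetwork: true                    # pod shares the node's network namespace
      dnsPolicy: ClusterFirstWithHostNet   # still resolve in-cluster service names
      containers:
        - name: app
          image: my-app:latest             # placeholder image
          ports:
            - containerPort: 8080
```

With this, other pods can look up my-app-headless (or the per-pod records such as my-app-0.my-app-headless), while the app itself listens on the node's interfaces.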
For further reference, please take a look at the below sources:
How do I get individual pod hostnames in a Deployment registered and looked up in Kubernetes?
SRV records

Can somebody explain why I have to use an external (MetalLB, HAProxy, etc.) load balancer with a bare-metal Kubernetes cluster?

For instance, I have a bare-metal cluster with 3 nodes, each with some instance exposing port 105. In order to expose it on an external IP address I can define a service of type NodePort with "externalIPs", and it seems to work well. The documentation says to use a load balancer, but I didn't quite understand why I have to use it and I am worried about making a mistake.
Can somebody explain why I have to use an external (MetalLB, HAProxy, etc.) load balancer with a bare-metal Kubernetes cluster?
You don't have to use it; it's up to you whether you would like to use NodePort or LoadBalancer.
Let's start with the difference between NodePort and LoadBalancer.
NodePort is the most primitive way to get external traffic directly to your service. As the name implies, it opens a specific port on all the Nodes (the VMs), and any traffic that is sent to this port is forwarded to the service.
LoadBalancer service is the standard way to expose a service to the internet. It gives you a single IP address that will forward all traffic to your service.
You can find more about that in kubernetes documentation.
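To make the difference concrete, here is a sketch of the two Service types (the names are placeholders; port 105 is taken from the question):

```yaml
# NodePort: the same high port is opened on every node;
# clients reach the service at <any-node-IP>:30105
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 105
      targetPort: 105
      nodePort: 30105        # optional; auto-assigned from 30000-32767 if omitted
---
# LoadBalancer: a single external IP is handed out for the service
# (on bare metal this needs something like MetalLB to work)
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 105
      targetPort: 105
```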
As for the question you've asked in the comment, "But NodePort with the 'externalIPs' option is doing exactly the same. The only tiny difference I see is that the IP should be owned by one of the cluster machines. So where is the benefit of using a LoadBalancer?", let me answer that more precisely.
These are the advantages and disadvantages of ExternalIP.
The advantages of using ExternalIP are:
You have full control of the IP that you use. You can use an IP that belongs to your ASN instead of a cloud provider's ASN.
The disadvantages of using ExternalIP are:
The simple setup that we will go through right now is NOT highly available. That means if the node dies, the service is no longer reachable and you'll need to manually remediate the issue.
There is some manual work that needs to be done to manage the IPs. The IPs are not dynamically provisioned for you, thus it requires manual human intervention.
Summarizing the pros and cons of both, we can conclude that ExternalIP is not made for a production environment: it's not highly available, and if the node dies the service will no longer be reachable and you will have to fix that manually.
With a LoadBalancer, if a node dies the service will be recreated automatically on another node. It's dynamically provisioned and there is no need to configure it manually like with ExternalIP.
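On bare metal, the LoadBalancer type needs something like MetalLB to hand out the external IP. If you go that route, the configuration is small; here is a sketch assuming a recent MetalLB release (which is configured with the IPAddressPool and L2Advertisement CRDs; older versions used a ConfigMap instead), with a placeholder address range that should belong to your network:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # placeholder range you own
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```

With this in place, any Service of type LoadBalancer gets one of the pool addresses assigned automatically.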

Having 1 outgoing IP for kubernetes egress traffic

Current set-up
Cluster specs: Managed Kubernetes on Digital Ocean
Goal
My pods are accessing some websites but I want to use a proxy first.
Problem
The proxy I need to use is only taking 1 IP address in an "allow-list".
My cluster is using different nodes, with a node autoscaler, so I have multiple, changing IP addresses.
Solutions I am thinking about
Setting up a proxy (Squid? NGINX?) outside of the cluster (currently not working when I access an HTTPS website)
Istio could let me set up a gateway? (No knowledge of Istio)
Use GCP managed K8s and follow the answers on Kubernetes cluster outgoing traffic IP. But all our stack is on Digital Ocean and the pricing is better there.
I am curious to know what the best practice or easiest solution is, or whether anyone has experienced such a use case before :)
Best
You could set up all your traffic to go through istio-egressgateway.
Then you could manipulate the istio-egressgateway to always be deployed on the same node of the cluster, and whitelist that IP address.
Pros: super easy. BUT if you are not using Istio already, setting up Istio just for this may be killing a mosquito with a bazooka.
Cons: You need to make sure the node doesn't change its IP address. Otherwise the istio-egressgateway itself might not get deployed (if you do not have the labels added to the new node), and you will need to reconfigure everything for the new node (new IP address). Another con might be the fact that if the traffic goes up, there is an HPA, which will deploy more replicas of the gateway, and all of them will be deployed on the same node. So, if you are going to have lots of traffic, maybe it would be a good idea to isolate one node just for this purpose.
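As a rough sketch of the pinning idea (exact field names depend on your Istio version; this assumes the IstioOperator install API and a hypothetical node label egress=true that you add to the chosen node):

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    egressGateways:
      - name: istio-egressgateway
        enabled: true
        k8s:
          nodeSelector:
            egress: "true"     # hypothetical label on the node whose IP is whitelisted
```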
Another option would be, as you are suggesting, a proxy. I would recommend an Envoy proxy directly. I mean, Istio is going to be using Envoy anyway, right? So just get the proxy directly, put it in a pod, and do the same thing as I mentioned before: node affinity, so it will always run on the same node and go out with the same IP.
Pros: You are not installing an entire service mesh control plane for one tiny thing.
Cons: Same as before; you still have the issue of the node IP changing if something goes wrong, plus you will need to manage your own Deployment object and HPA and configure the Envoy proxy, etc., instead of using Istio objects (like a Gateway and a VirtualService).
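A minimal sketch of that approach, assuming a node labeled egress=true and an Envoy configuration that you would write yourself into a ConfigMap (all names and the image tag are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: egress-proxy                       # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: egress-proxy
  template:
    metadata:
      labels:
        app: egress-proxy
    spec:
      nodeSelector:
        egress: "true"                     # hypothetical label on the whitelisted node
      containers:
        - name: envoy
          image: envoyproxy/envoy:v1.27.0  # assumed tag; pick a current release
          args: ["-c", "/etc/envoy/envoy.yaml"]
          volumeMounts:
            - name: envoy-config
              mountPath: /etc/envoy
      volumes:
        - name: envoy-config
          configMap:
            name: egress-proxy-config      # hypothetical ConfigMap holding envoy.yaml
```

Your workloads would then be pointed at this proxy (for example via HTTP_PROXY environment variables), so outbound traffic leaves from the pinned node's IP.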
Finally, I see a third option: set up a NAT gateway outside the cluster and configure your traffic to go through it.
Pros: You won't have to configure any Kubernetes object, therefore there is no need to set up any node affinity, and no node overwhelming or IP change. Plus you can remove the external IP addresses from your cluster, so it will be more secure (unless you have other workloads that need to reach the internet directly). Also, a single node configured as a NAT gateway will probably be more resilient than a Kubernetes pod running on a node.
Cons: Maybe a little bit more complicated to set up?
And there is this general con: you can whitelist only 1 IP address, so you will always have a single point of failure. Even a NAT gateway can still fail.
The GCP static IP won't help you. What the other post suggests is to reserve an IP address so you can always re-use it. But it's not as if that IP address gets automatically added to a random node when one goes down; human intervention is needed. I don't think you can have one specific node keep a static IP address such that, if it goes down, the newly created node picks up the same IP. That service, to my knowledge, doesn't exist.
Now, GCP does offer a very resilient NAT gateway. It is managed by Google, so it shouldn't fail. Not cheap, though.

Is it possible for me to get a log which shows the source IP of requests hitting a NodePort in my Kubernetes cluster?

I have a container with an exposed port in a pod. When I check the log in the containerized app, the source of the requests is always 192.168.189.0, which is a cluster IP. I need to be able to see the original source IP of the request. Is there any way to do this?
I tried setting the service's externalTrafficPolicy to Local instead of Cluster, but it still doesn't work. Please help.
When you are working on an application or service that needs to know the source IP address, you need to know the topology of the network you are using. This means you need to know how the different layers of load balancers or proxies work to deliver the traffic to your service.
Depending on the cloud provider or the load balancer you have in front of your application, the source IP address should be in a header of the request. The header to look for is X-Forwarded-For (more info here); depending on the proxy or load balancer you are using, sometimes you need to enable this header to receive the correct IP address.
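For completeness, the externalTrafficPolicy: Local setting mentioned in the question only preserves the client source IP when the request arrives at a node that actually runs one of the selected pods; with Local, traffic hitting other nodes is dropped, and with Cluster it is typically SNAT'd to a cluster-internal address. A minimal sketch with placeholder names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                    # hypothetical name
spec:
  type: NodePort
  externalTrafficPolicy: Local    # preserve the client source IP, but only on
                                  # nodes that actually run a matching pod
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

If there is still a load balancer or proxy in front, the original client IP will only be available through headers like X-Forwarded-For, as described above.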