I want to create a GlobalNetworkPolicy for an interface. I am using a Calico HostEndpoint for the interface and defining a GlobalNetworkPolicy for that HostEndpoint. I would like the GlobalNetworkPolicy to allow only ingress from within the cluster. A sample is given here.
In-cluster traffic is the traffic from pods and from nodes.
I have the podCIDR, so I can use that to ensure that traffic from pods is allowed.
How do I allow traffic from the nodes' own IP addresses, as per the link above?
What are the nodes' own IP addresses mentioned in the link?
It is basically referring to the Kubernetes node, more precisely to the Calico node resource, which is created when a calico/node instance is started. Calico automatically detects each node's IP address and subnet, and these are listed in the node resource configuration along with the AS association and the tunnel address (IP-in-IP or VXLAN).
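As a minimal sketch of the policy you describe (assuming the HostEndpoint carries a label such as role: k8s-worker, and using placeholder CIDRs for the pod network and the node subnet), the GlobalNetworkPolicy could look roughly like this:

apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-cluster-internal-ingress
spec:
  selector: role == 'k8s-worker'   # placeholder label on your HostEndpoints
  order: 10
  types:
    - Ingress
  ingress:
    - action: Allow
      source:
        nets:
          - 10.244.0.0/16   # placeholder pod CIDR
          - 10.0.0.0/24     # placeholder node subnet, covering the nodes' own IPs

The node subnet here stands in for exactly those node IP addresses that Calico records in the node resources mentioned above.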
Related
I have a kubernetes cluster with calico. I want to prevent routing through external interfaces to reach the internal clusterIPs of the cluster. I am planning to use this.
For which interfaces should the hostendpoint be defined? Is it only the interface on which the Kubernetes was advertised or for all the external interfaces in the cluster?
You should define a HostEndpoint for every network interface that you want to block/filter traffic on, and for every node in your cluster as well, since a given HostEndpoint of this type only protects a single interface on a single node.
Also, since defining a HostEndpoint in Calico will immediately block ALL network traffic to that node and network interface (except for a few "failsafe" ports by default), make sure to have your network policies in place BEFORE you define your HostEndpoints, so the traffic you want to allow will be allowed. You will want to consider if you need to allow traffic to/from the kubelet on each node, to/from your DNS servers, etc.
A common pattern is to use HostEndpoints for public network interfaces, since those are the most exposed, and not for your private network interfaces, since ideally those carry the pod-to-pod and node-to-node traffic that your Kubernetes cluster needs in order to function properly.
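For instance, a minimal sketch of such a pre-created policy (the selector and ports below are assumptions based on common defaults: kubelet on TCP 10250 and DNS on port 53) might look like this:

apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-kubelet-and-dns
spec:
  selector: has(role)   # assumed label present on your HostEndpoints
  order: 0
  ingress:
    - action: Allow
      protocol: TCP
      destination:
        ports: [10250]   # kubelet API (assumed default port)
  egress:
    - action: Allow
      protocol: UDP
      destination:
        ports: [53]      # DNS
    - action: Allow
      protocol: TCP
      destination:
        ports: [53]      # DNS over TCP

Apply a policy along these lines before creating the HostEndpoints, so that node-critical traffic stays allowed.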
The example from the article you mentioned has it: spec.interfaceName: eth0. Have you tried it so far?
For each host endpoint that you want to secure with policy, you must create a HostEndpoint object. To do that, you need the name of the Calico node on the host that owns the interface; in most cases, it is the same as the hostname of the host.
In the following example, we create a HostEndpoint for the host named my-host with the interface named eth0, with IP 10.0.0.1. Note that the value for node: must match the hostname used on the Calico node object.
When the HostEndpoint is created, traffic to or from the interface is dropped unless policy is in place.
apiVersion: projectcalico.org/v3
kind: HostEndpoint
metadata:
  name: my-host-eth0
  labels:
    role: k8s-worker
    environment: production
spec:
  interfaceName: eth0
  node: my-host
  expectedIPs: ["10.0.0.1"]
I have a multi-node cluster setup. There are Kubernetes network policies defined for the pods in the cluster. I can access the services or pods using their clusterIP/podIP only from the node where the pod resides. For services with multiple pods, I cannot reliably access the service from a node at all (I guess the service only works when it happens to direct the traffic to a pod residing on the same node I am calling from).
Is this the expected behavior?
Is it a Kubernetes limitation or a security feature?
For debugging etc., we might need to access the services from the node. How can I achieve it?
No, it is not the expected behavior for Kubernetes. Pods should be accessible from all the nodes inside the same cluster through their internal IPs. A ClusterIP service exposes the service on a cluster-internal IP and makes it reachable from within the cluster; this is basically the default for all the service types, as stated in the Kubernetes documentation.
Services are not node-specific and they can point to a pod regardless of where it runs in the cluster at any given moment in time. Also make sure that you are using the cluster-internal port while trying to reach the services. If you still can connect to the pod only from the node where it is running, you might need to check if something is wrong with your networking, e.g. check whether UDP ports are blocked.
EDIT: Concerning network policies - by default, a pod is non-isolated for both egress and ingress, i.e. if no NetworkPolicy resource is defined for the pod in Kubernetes, all traffic is allowed to/from this pod - the so-called default-allow behavior. Basically, without network policies all pods are allowed to communicate with all other pods/services in the same cluster, as described above.
If one or more NetworkPolicy objects apply to a particular pod, it will reject all traffic that is not explicitly allowed by those policies (meaning, a NetworkPolicy that both selects the pod and has "Ingress"/"Egress" in its policyTypes) - the default-deny behavior.
What is more:
Network policies do not conflict; they are additive. If any policy or policies apply to a given pod for a given direction, the connections allowed in that direction from that pod is the union of what the applicable policies allow.
So yes, it is expected behavior for Kubernetes NetworkPolicy - when a pod is isolated for ingress/egress, the only allowed connections into/from the pod are those from the pod's node and those allowed by the rules of the NetworkPolicy objects defined.
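For illustration, the default-deny ingress pattern described above can be expressed with a minimal sketch like this (the namespace is a placeholder):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default   # placeholder namespace
spec:
  podSelector: {}      # selects every pod in the namespace
  policyTypes:
    - Ingress          # no ingress rules are listed, so all ingress is denied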
To be compatible with Kubernetes NetworkPolicy, Calico network policy follows the same behavior for Kubernetes pods.
A NetworkPolicy is applied to pods within a particular namespace - either the same one or a different one, with the help of selectors.
As for node-specific policies: nodes can't be targeted by their Kubernetes identities; instead, CIDR notation should be used in the form of an ipBlock in the pod/service NetworkPolicy, selecting particular IP ranges to allow as ingress sources or egress destinations for the pod/service.
Whitelisting the Calico IP addresses of each node might seem to be a valid option in this case; please have a look at the similar issue described here.
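A minimal sketch of such an ipBlock-based policy, using a placeholder node IP range of 192.168.0.0/24, could look like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-node-ips
  namespace: default               # placeholder namespace
spec:
  podSelector: {}                  # applies to all pods in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 192.168.0.0/24   # placeholder node IP range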
Can someone tell me why the service hop won't become a single point of failure?
In a Kubernetes Service, I see a Service hop between the client and the Pods.
I guess all the services' routing info (let's say there are 5000 services and each service has 3 Pods) is stored in the iptables rules of each node?
Kubernetes services connect a set of pods to an abstracted service name and IP address. Services provide discovery and routing between pods.
It depends upon the CNI plugin you are using and what type of network it uses. Every network plugin has a different approach for how a Pod IP address is assigned (IPAM), how iptables rules and cross-node networking are configured, and how routing information is exchanged between the nodes.
I'm trying to wrap my head around exposing internal load balancing to the outside world on a bare metal k8s cluster.
Let's say we have a basic cluster:
Some master nodes and some worker nodes, each of which has two interfaces: one public-facing (eth0) and one local (eth1) with an IP within the 192.168.0.0/16 network
Deployed MetalLB and configured the 192.168.200.200-192.168.200.254 range for its internal IPs
An ingress controller with its service of type LoadBalancer
MetalLB should now assign one of the IPs from 192.168.200.200-192.168.200.254 to the ingress service, as far as I currently understand.
But I have the following questions:
On every node, can I curl the ingress controller's external IP (as long as it is reachable on eth1) with a host header attached and get a response from the service that's configured in the corresponding ingress resource, or is this valid only on the node where the Ingress pods are currently placed?
What are my options to pass external traffic incoming on eth0 to an ingress listening on the eth1 network?
Is it possible to forward requests while preserving the source IP address, or is attaching an X-Forwarded-For header the only option?
Assuming that we are talking about MetalLB in layer 2 mode.
Addressing the following questions:
On every node, can I curl the ingress controller's external IP (as long as it is reachable on eth1) with a host header attached and get a response from the service that's configured in the corresponding ingress resource, or is this valid only on the node where the Ingress pods are currently placed?
Is it possible to forward requests while preserving the source IP address, or is attaching an X-Forwarded-For header the only option?
Depending on whether you need to preserve the source IP, this can go both ways:
Preserve the source IP address
To do that, you would need to configure your Ingress controller's Service of type LoadBalancer to use the "Local" traffic policy by setting (in your YAML manifest):
.spec.externalTrafficPolicy: Local
This setup is valid as long as there is a replica of your Ingress controller on each Node, since all of the traffic coming to your controller will be contained within a single Node.
Citing the official docs:
With the Local traffic policy, kube-proxy on the node that received the traffic sends it only to the service’s pod(s) that are on the same node. There is no “horizontal” traffic flow between nodes.
Because kube-proxy doesn’t need to send traffic between cluster nodes, your pods can see the real source IP address of incoming connections.
The downside of this policy is that incoming traffic only goes to some pods in the service. Pods that aren’t on the current leader node receive no traffic, they are just there as replicas in case a failover is needed.
Metallb.universe.tf: Usage: Local traffic policy
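As a sketch under assumptions (the name, namespace and selector below are placeholders, not tied to any particular ingress controller), such a Service could look like this:

apiVersion: v1
kind: Service
metadata:
  name: ingress-controller
  namespace: ingress-system        # placeholder namespace
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local     # preserve the client source IP
  selector:
    app: ingress-controller        # placeholder pod label
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443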
Do not preserve the source IP address
If your use case does not require you to preserve the source IP address, you could go with the:
.spec.externalTrafficPolicy: Cluster
This setup doesn't require a replica of your Ingress controller to be present on each Node.
Citing the official docs:
With the default Cluster traffic policy, kube-proxy on the node that received the traffic does load-balancing, and distributes the traffic to all the pods in your service.
This policy results in uniform traffic distribution across all pods in the service. However, kube-proxy will obscure the source IP address of the connection when it does load-balancing, so your pod logs will show that external traffic appears to be coming from the service’s leader node.
Metallb.universe.tf: Usage: Cluster traffic policy
Addressing the 2nd question:
What are my options to pass external traffic incoming on eth0 to an ingress listening on the eth1 network?
MetalLB listens on all interfaces by default; all you need to do is specify the address pool of that interface's network within the MetalLB config.
You can find more reference on this topic by following:
Metallb.universe.tf: FAQ: In layer 2 mode how to specify the host interface for an address pool
An example of such a configuration could be the following:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools: # HERE
    - name: my-ip-space
      protocol: layer2
      addresses:
      - 192.168.1.240/28
I have a bare metal Kubernetes cluster with an haproxy ingress controller (DaemonSet) on an external IP. Is it possible to restrict kube-proxy to route only to the local haproxy ingress pod?
To be more specific, I have 2 pods of the haproxy ingress controller and use one external IP for them. As per my understanding, kube-proxy will route to the pods in round-robin fashion. I didn't find any way to restrict this particular behaviour.
Set externalTrafficPolicy: Local in the NodePort Service.
This will make it so that traffic arriving at a node X only goes to the pod on node X. If there is no pod on node X, the traffic will be dropped (but this should not be an issue since you're using a DaemonSet).
Another benefit is that this preserves the true source IP that haproxy sees. Without externalTrafficPolicy, it is possible that haproxy sees the source IP of another node instead of the original one, since nodes can proxy traffic.
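As a sketch under assumptions (the name, namespace and labels below are placeholders and must match your actual DaemonSet), the Service could look like this:

apiVersion: v1
kind: Service
metadata:
  name: haproxy-ingress
  namespace: ingress-controller    # placeholder namespace
spec:
  type: NodePort
  externalTrafficPolicy: Local     # route only to pods on the receiving node
  selector:
    app: haproxy-ingress           # placeholder: must match the DaemonSet's pod labels
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443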
More info here