I have a kubernetes cluster with calico. I want to prevent routing through external interfaces to reach the internal clusterIPs of the cluster. I am planning to use this.
For which interfaces should the HostEndpoint be defined? Is it only the interface on which the Kubernetes API is advertised, or all the external interfaces on the cluster's nodes?
You should define a HostEndpoint for every network interface that you want to block/filter traffic on, and for every node in your cluster as well, since a given HostEndpoint of this type only protects a single interface on a single node.
Also, since defining a HostEndpoint in Calico will immediately block ALL network traffic to that node and network interface (except for a few "failsafe" ports by default), make sure to have your network policies in place BEFORE you define your HostEndpoints, so the traffic you want to allow will be allowed. You will want to consider if you need to allow traffic to/from the kubelet on each node, to/from your DNS servers, etc.
A common pattern is to use HostEndpoints for public network interfaces, since those are the most exposed, and not for your private network interfaces, since ideally those carry the pod-to-pod and node-to-node traffic that your Kubernetes cluster needs in order to function properly.
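As a sketch of the kind of policy you would want in place before creating the HostEndpoints (the label selector, node subnet, and kubelet port below are assumptions; adapt them to your cluster):

apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-cluster-internal
spec:
  # Applies to HostEndpoints carrying the role: k8s-worker label (assumed label)
  selector: role == 'k8s-worker'
  ingress:
  # Allow traffic from the private node subnet (assumed to be 192.168.0.0/24)
  - action: Allow
    source:
      nets:
      - 192.168.0.0/24
  # Allow inbound connections to the kubelet API (TCP 10250)
  - action: Allow
    protocol: TCP
    destination:
      ports:
      - 10250
  egress:
  - action: Allow

With this applied first, node-to-node and kubelet traffic keeps flowing when the HostEndpoints start dropping everything else.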
The example from the article you mentioned has it: spec.interfaceName: eth0. Have you tried it so far?
For each host interface that you want to secure with policy, you must create a HostEndpoint object. To do that, you need the name of the Calico node on the host that owns the interface; in most cases, it is the same as the hostname of the host.
In the following example, we create a HostEndpoint for the host named my-host with the interface named eth0, with IP 10.0.0.1. Note that the value for node: must match the hostname used on the Calico node object.
When the HostEndpoint is created, traffic to or from the interface is dropped unless policy is in place.
apiVersion: projectcalico.org/v3
kind: HostEndpoint
metadata:
  name: my-host-eth0
  labels:
    role: k8s-worker
    environment: production
spec:
  interfaceName: eth0
  node: my-host
  expectedIPs: ["10.0.0.1"]
I want to create a GlobalNetworkPolicy for an interface. I am using a Calico HostEndpoint for the interface and defining a GlobalNetworkPolicy for the HostEndpoint. I would like to create a GlobalNetworkPolicy that allows only ingress from within the cluster. A sample is given here.
In-cluster traffic is the traffic from pods and from nodes.
I have the podCIDR, so I can use that to ensure that traffic from pods is allowed.
How do I allow traffic from the nodes' own IP addresses, as per the link above?
What are the nodes' own IP addresses mentioned in the link?
It is basically referring to the Kubernetes node, more precisely to the node resource, which is created when a calico/node instance is started. Calico automatically detects each node's IP address and subnet, and along with the AS association and tunnel address (IP-in-IP or VXLAN), these are listed in the node resource configuration.
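Putting the two together, a sketch of such a policy (the CIDRs and selector below are assumptions, not values from your cluster) could look like:

apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-cluster-ingress
spec:
  selector: has(role)   # assumed label present on your HostEndpoints
  types:
  - Ingress
  ingress:
  - action: Allow
    source:
      nets:
      - 10.244.0.0/16   # podCIDR (assumption)
      - 10.0.0.0/24     # subnet containing the nodes' own IPs (assumption)

If your nodes live in a single subnet, listing that subnet covers the nodes' own IP addresses without enumerating each node.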
I'm trying to wrap my head around exposing internal loadbalancing to outside world on bare metal k8s cluster.
Let's say we have a basic cluster:
Some master nodes and some worker nodes, each with two interfaces: one public-facing (eth0) and one local (eth1) with an IP within the 192.168.0.0/16 network
Deployed MetalLB and configured the 192.168.200.200-192.168.200.254 range for its internal IPs
Ingress controller with its service with type LoadBalancer
MetalLB should now assign one of the IPs from 192.168.200.200-192.168.200.254 to the ingress service, as far as I understand.
But I have some following questions:
Could I curl the ingress controller's external IP from every node (as long as it is reachable on eth1), with a Host header attached, and get a response from the service configured in the corresponding Ingress resource, or is that valid only on the node where the Ingress pods are currently placed?
What are my options to pass incoming external traffic to eth0 to an ingress listening on eth1 network?
Is it possible to forward requests while preserving the source IP address, or is attaching an X-Forwarded-For header the only option?
Assuming that we are talking about Metallb using Layer2.
Addressing the following questions:
Could I curl the ingress controller's external IP from every node (as long as it is reachable on eth1), with a Host header attached, and get a response from the service configured in the corresponding Ingress resource, or is that valid only on the node where the Ingress pods are currently placed?
Is it possible to forward requests while preserving the source IP address, or is attaching an X-Forwarded-For header the only option?
Depending on whether you need to preserve the source IP address, this question can go both ways:
Preserve the source IP address
To do that you would need to set the Service of type LoadBalancer of your Ingress controller to use the "Local" external traffic policy by setting (in your YAML manifest):
.spec.externalTrafficPolicy: Local
This setup will be valid as long as there is a replica of your Ingress controller on each Node, since all of the networking coming to your controller will be contained within a single Node.
Citing the official docs:
With the Local traffic policy, kube-proxy on the node that received the traffic sends it only to the service’s pod(s) that are on the same node. There is no “horizontal” traffic flow between nodes.
Because kube-proxy doesn’t need to send traffic between cluster nodes, your pods can see the real source IP address of incoming connections.
The downside of this policy is that incoming traffic only goes to some pods in the service. Pods that aren’t on the current leader node receive no traffic, they are just there as replicas in case a failover is needed.
Metallb.universe.tf: Usage: Local traffic policy
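A minimal sketch of such a Service (the name, namespace, and selector are illustrative, not taken from your setup):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx        # illustrative name
  namespace: ingress-nginx   # illustrative namespace
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the client source IP
  selector:
    app: ingress-nginx           # assumed pod label of your Ingress controller
  ports:
  - name: http
    port: 80
    targetPort: 80

MetalLB will then only announce the external IP from nodes that actually host a ready Ingress controller pod.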
Do not preserve the source IP address
If your use case does not require you to preserve the source IP address, you could go with the:
.spec.externalTrafficPolicy: Cluster
This setup doesn't require a replica of your Ingress controller to be present on each Node.
Citing the official docs:
With the default Cluster traffic policy, kube-proxy on the node that received the traffic does load-balancing, and distributes the traffic to all the pods in your service.
This policy results in uniform traffic distribution across all pods in the service. However, kube-proxy will obscure the source IP address of the connection when it does load-balancing, so your pod logs will show that external traffic appears to be coming from the service’s leader node.
Metallb.universe.tf: Usage: Cluster traffic policy
Addressing the 2nd question:
What are my options to pass incoming external traffic to eth0 to an ingress listening on eth1 network?
MetalLB listens on all interfaces by default; all you need to do is specify the address pool for that interface within the MetalLB config.
You can find more reference on this topic by following:
Metallb.universe.tf: FAQ: In layer 2 mode how to specify the host interface for an address pool
An example of such a configuration could be the following:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools: # HERE
    - name: my-ip-space
      protocol: layer2
      addresses:
      - 192.168.1.240/28
I tried to create a k8s cluster on AWS using kops.
After creating the cluster with the default definition, I saw that a LoadBalancer had been created.
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: bungee.staging.k8s.local
spec:
  api:
    loadBalancer:
      type: Public
  ....
I am just wondering about the reason for creating the LoadBalancer along with the cluster.
Appreciate it!
In the type of cluster that kops creates, the apiserver (referred to as api above, a component of the Kubernetes master, aka control plane) may not have a static IP address. Also, kops can create an HA (replicated) control plane, which means there will be multiple IPs where the apiserver is available.
The apiserver functions as a central connection hub for all other Kubernetes components; for example, all the nodes connect to it, but human operators also connect to it via kubectl. For one, the kubeconfig files these components use do not support multiple IP addresses for the apiserver (as would be needed to make use of the HA setup). Plus, updating the configuration files every time the apiserver IP address(es) change would be difficult.
So the load balancer functions as a front for the apiserver(s) with a single, static IP address (an anycast IP with AWS/GCP). This load balancer IP is specified in the configuration files of Kubernetes components instead of actual apiserver IP(s).
Actually, it is also possible to solve this problem by using a DNS name that resolves to the IP(s) of the apiserver(s), coupled with a mechanism that keeps this record updated. This solution can't react to changes of the underlying IP(s) as fast as a load balancer can, but it does save you a couple of bucks, plus it is slightly less likely to fail and creates less dependency on the cloud provider. This can be configured like so:
spec:
  api:
    dns: {}
See specification for more details.
I have a Kubernetes cluster running Calico as the overlay and NetworkPolicy implementation configured for IP-in-IP encapsulation and I am trying to expose a simple nginx application using the following Service:
apiVersion: v1
kind: Service
metadata:
name: nginx
namespace: default
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 80
selector:
app: nginx
I am trying to write a NetworkPolicy that only allows connections via the load balancer. On a cluster without an overlay, this can be achieved by allowing connections from the CIDR used to allocate IPs to the worker instances themselves - this allows a connection to hit the Service's NodePort on a particular worker and be forwarded to one of the containers behind the Service via IPTables rules. However, when using Calico configured for IP-in-IP, connections made via the NodePort use Calico's IP-in-IP tunnel IP address as the source address for cross node communication, as shown by the ipv4IPIPTunnelAddr field on the Calico Node object here (I deduced this by observing the source IP of connections to the nginx application made via the load balancer). Therefore, my NetworkPolicy needs to allow such connections.
My question is how can I allow these types of connections without knowing the ipv4IPIPTunnelAddr values beforehand and without allowing connections from all Pods in the cluster (since the ipv4IPIPTunnelAddr values are drawn from the cluster's Pod CIDR range). If worker instances come up and die, the list of such IPs with surely change and I don't want my NetworkPolicy rules to depend on them.
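For reference, the non-overlay approach described above would look something like the following sketch (the worker instance CIDR and labels are assumptions):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-workers
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.0.0.0/16   # worker instance CIDR (assumption)
    ports:
    - protocol: TCP
      port: 80

The problem with IP-in-IP is that the equivalent rule would have to cover the tunnel IPs, which fall inside the pod CIDR rather than the instance CIDR.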
Calico version: 3.1.1
Kubernetes version: 1.9.7
Etcd version: 3.2.17
Cloud provider: AWS
I’m afraid we don’t have a simple way to match the tunnel IPs dynamically right now. If possible, the best solution would be to move away from IPIP; once you remove that overlay, everything gets a lot simpler.
In case you're wondering, we need to force the nodes to use the tunnel IP because, if you're using IPIP, we assume that your network doesn't allow direct pod-to-node return traffic (since the network won't be expecting the pod IP, it may drop the packets).
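If your network does allow direct pod-to-node return traffic, moving away from IPIP typically means setting ipipMode on the IPPool resource; a sketch, assuming a default pool with this CIDR:

apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 192.168.0.0/16   # assumed pod CIDR
  ipipMode: Never        # disable IP-in-IP encapsulation
  natOutgoing: true

With encapsulation off, cross-node NodePort traffic keeps the node's real IP as its source, so the instance-CIDR-based NetworkPolicy approach works again.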
We presently have a setup where applications within our mesos/marathon cluster want to reach out to services which may or may not reside in our mesos/marathon cluster. Ingress for external traffic into the cluster is accomplished via an Amazon ELB sitting in front of a cluster of Traefik instances, which then choose the appropriate set of container instances to load-balance to by comparing the incoming HTTP Host header against an essentially many-to-one mapping of configured host headers to container instances. Internal-to-internal traffic actually takes this same route as well, since the DNS record associated with a given service is mapped to that same ELB both inside and outside our mesos/marathon cluster. We also allow multiple DNS records to point at the same container set.
This setup works, but causes seemingly unnecessary network traffic and load against our ELBs as well as our Traefik cluster, as if the applications in the containers or another component were able to self-determine that the services they wished to call out to were within the specific mesos/marathon cluster they were in, and make an appropriate call to either something internal to the cluster fronting the set of containers, or directly to the specific container itself.
From what I understand of Kubernetes, it provides the concept of Services, which essentially act as a front for a set of pods based on configuration describing which pods the Service should match. However, I'm not entirely sure of the mechanism by which applications in a Kubernetes cluster can transparently direct network traffic to the Service IPs. I think some of this can be helped by having Envoy proxy traffic meant for, e.g., <application-name>.<cluster-name>.company.com to the service name, but if we have a CNAME that maps to that previous DNS entry (say, <application-name>.company.com), I'm not entirely sure how we can avoid exiting the cluster.
Is there a good way to solve for both cases? We are trying to avoid having our applications' logic have to understand that it's sitting in a particular cluster and would prefer a component outside of the applications to perform the routing appropriately.
If I am fundamentally misunderstanding a particular component, I would gladly appreciate correction!
When you are using service-to-service communication inside a cluster, you are using the Service abstraction, which is something like a static point that will route traffic to the right pods.
A Service endpoint is available only from inside the cluster, by its IP or by an internal DNS name provided by the internal Kubernetes DNS server. So, for communication inside a cluster, you can use DNS names like <servicename>.<namespace>.svc.cluster.local.
But, what is more important, Service has a static IP address.
So, now you can add that static IP as a hosts record to the pods inside the cluster to make sure that they communicate with each other inside the cluster.
For that, you can use HostAlias feature. Here is an example of configuration:
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "10.0.1.23"
    hostnames:
    - "my.first.internal.service.example.com"
  - ip: "10.1.2.3"
    hostnames:
    - "my.second.internal.service.example.com"
  containers:
  - name: cat-hosts
    image: busybox
    command:
    - cat
    args:
    - "/etc/hosts"
So, if you use your internal Service IP in combination with the service's public FQDN, all traffic from your pod will stay 100% inside the cluster, because the application will use the internal IP address.
Also, you can use an upstream DNS server that contains the same aliases; the idea is the same.
With upstream DNS for a separate zone, resolution of that zone is delegated to that server.
With newer versions of Kubernetes, which use CoreDNS to provide the DNS service and which have more features, it is a bit simpler.
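For example, with CoreDNS the rewrite plugin can map an external-looking name onto an in-cluster Service name; a sketch (the hostname and service name below are assumptions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        # Rewrite the public FQDN to the in-cluster Service name (names assumed)
        rewrite name my-app.company.com my-app.default.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf
        cache 30
    }

This keeps the traffic inside the cluster without touching /etc/hosts in every pod.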