Networking addon for Google Kubernetes Engine

I was just checking the network driver used for Google Kubernetes Engine. It seems Calico is the default GKE driver for network policy:
networkPolicyConfig: {}
clusterIpv4Cidr: 172.31.92.0/22
createTime: '2022-01-18T19:41:27+00:00'
--
networkPolicy:
  enabled: true
  provider: CALICO
Is it possible to replace Calico with some other networking addon for GKE?

Calico is only used for network policies in GKE. By default, GKE uses a Google network plugin. You also have the option to enable Dataplane V2, which is eBPF-based.
In both cases the plugins are managed by Google and you cannot change them.
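For reference, a quick way to check which network policy provider is active, plus a sketch of creating a Dataplane V2 cluster (the cluster name and zone are placeholders):
# inspect the network policy provider of an existing cluster
gcloud container clusters describe my-cluster --zone us-central1-a --format="value(networkPolicy.provider)"
# create a cluster with Dataplane V2 (eBPF) instead of Calico-based network policy
gcloud container clusters create my-cluster --zone us-central1-a --enable-dataplane-v2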

To complement @boredabdel's answer:
You cannot change the network plugin; however, connectivity behaves quite differently depending on whether you enable or disable Network Policy:
Note that this connectivity differs drastically depending on whether you use GKE's native Container Network Interface (CNI) or choose to use Calico's implementation by enabling Network policy when you create the cluster.
If you use GKE's CNI, one end of the Virtual Ethernet Device (veth) pair is attached to the Pod in its namespace, and the other is connected to the Linux bridge device cbr0. In this case, the following command shows the various Pods' MAC addresses attached to cbr0:
arp -n
Running the following command in the toolbox container shows the root namespace end of each veth pair attached to cbr0:
brctl show cbr0
If Network Policy is enabled, one end of the veth pair is attached to the Pod and the other to eth0. In this case, the following command shows the various Pods' MAC addresses attached to different veth devices:
arp -n
Running the following command in the toolbox container shows that there is not a Linux bridge device named cbr0:
brctl show
The iptables rules that facilitate forwarding within the cluster differ from one scenario to the other. It is important to have this distinction in mind during detailed troubleshooting of connectivity issues.
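If you want to compare those rules yourself, a minimal sketch (run on a node, for example from the toolbox container):
# forwarding rules that handle inter-Pod traffic
sudo iptables -L FORWARD -n -v
# NAT rules programmed by kube-proxy
sudo iptables -t nat -L -n -v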
Also have a look at the documentation regarding Migrating from Calico to Dataplane V2, which may also affect networking.
Additionally, you may find the Network overview for GKE documentation useful.
Here's also a very detailed explanation of networking inside GKE.

Related

Is there a way to prevent inter-namespace communication of pods in Kubernetes without using network policy

I am setting up a hybrid cluster (a CentOS master and 2 Windows 2019 worker nodes) with containerd as the runtime. I cannot use CNIs like Calico and Weave as they need Docker as the runtime. I can use Flannel, but it does not support network policies well. Is there a way to prevent inter-namespace communication of pods in Kubernetes WITHOUT using network policy?
Is there a way to prevent inter-namespace communication of pods in Kubernetes WITHOUT using network policy?
Network policies were created for that exact purpose, and as per the documentation you need a CNI that supports them; otherwise they will be ignored:
Network policies are implemented by the network plugin.
To use network policies, you must be using a networking solution which
supports NetworkPolicy. Creating a NetworkPolicy resource without a
controller that implements it will have no effect.
If your only option is to use Flannel for networking, you can install Calico network policy to secure cluster communications. So basically you are installing Calico for policy and Flannel for networking, a combination commonly known as Canal. You can find more details in the Calico docs.
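As a sketch, on recent clusters Canal can be installed with a single manifest (the URL follows the Calico docs at the time of writing; verify it against the version you use):
kubectl apply -f https://docs.projectcalico.org/manifests/canal.yaml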
Here's also a good answer on how to set up Calico with containerd that you might find useful for your case.
As Flannel is a networking-only solution with no support for NetworkPolicy, you can instead implement security at the service level (any form of authorization such as user/password, certificates, SAML, OAuth, etc.).
But without NetworkPolicy you will lose firewall-like security, which may not be what you want.
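For illustration, once a policy-capable CNI such as Canal is running, a minimal policy like the one below blocks ingress from other namespaces (the namespace name is hypothetical):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: team-a          # hypothetical namespace
spec:
  podSelector: {}            # select every pod in this namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}        # only pods from the same namespace may connect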

What combination of firewall rules is suitable for Kubernetes with Flannel as the CNI?

I have been trying to find the right firewall rules to apply on a Kubernetes kubeadm cluster with Flannel as the CNI.
I opened these ports:
6443/tcp, 2379/tcp, 2380/tcp, 8285/udp, 8472/udp, 10250/tcp, 10251/tcp, 10252/tcp, 10255/tcp, 30000-32767/tcp.
But I always end up with a service that cannot reach other services, or I am unable to reach the dashboard, unless I disable the firewall. I always start with a fresh cluster.
Kubernetes version 1.15.4.
Is there any source that lists suitable rules to apply on a cluster created by kubeadm with Flannel running inside containers?
As stated in the kubeadm system requirements:
Full network connectivity between all machines in the cluster (public or private network is fine)
It's a very common practice to put all custom rules on the gateway (ADC) or into cloud security groups, to prevent conflicting rules.
Then you have to ensure that the iptables tooling does not use the nftables backend.
The nftables backend is not compatible with the current kubeadm packages: it causes duplicated firewall rules and breaks kube-proxy.
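On Debian-family systems this is typically done by switching the tooling to the legacy binaries, as sketched in the kubeadm install docs:
update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
update-alternatives --set arptables /usr/sbin/arptables-legacy
update-alternatives --set ebtables /usr/sbin/ebtables-legacy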
And ensure the required ports are open between all machines of the cluster.
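As a sketch, assuming firewalld, the ports from the list in the question can be opened like this:
firewall-cmd --permanent --add-port=6443/tcp          # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp     # etcd
firewall-cmd --permanent --add-port=10250-10252/tcp   # kubelet, scheduler, controller-manager
firewall-cmd --permanent --add-port=8285/udp          # flannel udp backend
firewall-cmd --permanent --add-port=8472/udp          # flannel vxlan backend
firewall-cmd --permanent --add-port=30000-32767/tcp   # NodePort services
firewall-cmd --reload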
Other security measures should be deployed through other components, like:
Network Policy (Depending on the network providers)
Ingress
RBAC
And others.
Also check the articles about Securing a Cluster and Kubernetes Security - Best Practice Guide.

In k8s 1.16, when using kubenet and dual-stack, how to configure a node to ping the cbr0 gateway on another node?

I installed a Kubernetes v1.16 cluster with two nodes and enabled "IPv4/IPv6 dual-stack", following this guide. For dual-stack, I set --network-plugin=kubenet on the kubelet.
Now the pods have IPv4 and IPv6 addresses, and each node has a cbr0 gateway with both an IPv4 and an IPv6 address.
But when I ping from a node to the cbr0 gateway of the other node, it fails.
I tried to add a route manually, as follows:
ip route add [podCIDR of other node] via [ip address of other node]
After I added the route on both nodes, I can ping the cbr0 gateway successfully over IPv4.
But adding routes manually does not seem to be the correct way.
When I use kubenet, how should I configure things so that one node can ping the cbr0 gateway of another node?
Kubenet is a requirement for enabling IPv6, and as you stated, kubenet has some limitations; here we can read:
Kubenet is a very basic, simple network plugin, on Linux only. It does
not, of itself, implement more advanced features like cross-node
networking or network policy. It is typically used together with a
cloud provider that sets up routing rules for communication between
nodes, or in single-node environments.
I would like to highlight that kubenet does not create routes automatically for you.
Based on this information we can understand that in your scenario this is the expected behavior and there is no problem happening.
If you want to keep going in this direction you need to create routes manually.
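A sketch of what those manual routes look like, with hypothetical node addresses and pod CIDRs:
# on node A, route node B's pod CIDRs via node B's addresses
ip route add 10.244.1.0/24 via 192.168.0.12
ip -6 route add fd00:10:244:1::/64 via fd00::12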
It's important to remember this is an alpha feature (work in progress).
There is also some work being done to make it possible to bootstrap a Kubernetes cluster with dual-stack using kubeadm, but it's not usable yet and there is no ETA for it.
There are some examples of IPv6 and dual-stack setups with other networking plugins in this repository, but they still require adding routes manually.
This project serves two primary purposes: (i) study and validate ipv6
support in kubernetes and associated plugins (ii) provide a dev
environment for implementing and testing additional functionality
(e.g. dual-stack)

Defining the CIDR address and advertise-address in a Kubernetes installation

I am trying to install Kubernetes on my on-premises Ubuntu 16.04 server, referring to the following documentation:
https://medium.com/@Grigorkh/install-kubernetes-on-ubuntu-1ac2ef522a36
After installing kubelet, kubeadm, and kubernetes-cni, I found that I need to initialize kubeadm with the following command:
kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.133.15.28 --kubernetes-version stable-1.8
Here I am totally confused about why we are setting the CIDR and the API server advertise address. I am adding a few points of confusion about Kubernetes here:
Why are we specifying the CIDR and --apiserver-advertise-address here?
How can I find these two addresses for my server?
And why is Flannel used in the Kubernetes installation?
I am new to this containerization and Kubernetes world.
Why are we specifying the CIDR and --apiserver-advertise-address here?
And why is Flannel used in the Kubernetes installation?
Kubernetes uses the Container Network Interface (CNI) to create a special virtual network inside your cluster for communication between pods.
Here is some explanation of the "why" from the documentation:
Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):
all containers can communicate with all other containers without NAT
all nodes can communicate with all containers (and vice-versa) without NAT
the IP that a container sees itself as is the same IP that others see it as
Kubernetes applies IP addresses at the Pod scope - containers within a Pod share their network namespaces - including their IP address. This means that containers within a Pod can all reach each other’s ports on localhost. This does imply that containers within a Pod must coordinate port usage, but this is no different than processes in a VM. This is called the “IP-per-pod” model.
So, Flannel is one of the CNI plugins which can be used to create the network that connects all your pods, and the CIDR option defines a subnet for that network. There are many alternative CNI plugins with similar functions.
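For example, Flannel's stock manifest defaults to the 10.244.0.0/16 pod network, which is why the kubeadm command above passes the matching --pod-network-cidr; after init, the plugin is installed with a manifest like this (URL as in the older Flannel docs; check your version):
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml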
If you want more details about how networking works in Kubernetes, you can read the documentation linked above or, for example, here.
How can I find these two addresses for my server?
The API server advertise address has to be a single, static address. That address is used by all components to communicate with the API server. Unfortunately, Kubernetes has no support for multiple API server addresses per master.
You can still have as many addresses on your server as you want, but only one of them can be defined as --apiserver-advertise-address. The only requirement is that it has to be accessible from all the nodes in your cluster.
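A quick way to list candidate addresses on your server (a sketch; pick the one reachable from all nodes):
ip -4 addr show
# or simply:
hostname -I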

Any way to access a Calico network from non-Calico nodes

I am very new to Calico and Calico networking; so far I have gone through the Calico docs.
My question is: is there any way to access a Calico network from non-Calico nodes?
I went through all the docs but haven't found any solution. Am I missing something?
If you check the documentation at https://docs.projectcalico.org/v2.6/usage/external-connectivity, you will find it mentioned in the "Inbound connectivity" part:
BGP peering into your network infrastructure, or using orchestrator-specific options.
But if you want simple connectivity, a better option is to run the calico/node service on the non-Calico node. The calicoctl command line tool can be used to launch the calico/node container, configured to connect to the datastore being used.
That will cause the routes to be distributed to the host, which will then be able to reach the workloads.
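As a rough sketch based on that era of the docs (the etcd endpoint and image tag are placeholders; check the version you run):
# on the non-Calico host, point calicoctl at the cluster datastore and start calico/node
sudo ETCD_ENDPOINTS=http://etcd-host:2379 calicoctl node run --node-image=quay.io/calico/node:v2.6.12
# BGP then distributes workload routes to this host; verify with:
ip route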
Found a similar reference: https://github.com/projectcalico/calico/issues/858
Hope this helps.