Any way to access Calico network by non-Calico nodes - project-calico

I am very new to Calico and Calico networking; so far I have gone through the Calico docs.
My question is: is there any way to access a Calico network from non-Calico nodes?
I went through all the docs but haven't found a solution. Am I missing something?

If you check the documentation at https://docs.projectcalico.org/v2.6/usage/external-connectivity , you will find it mentioned in the Inbound connectivity part:
BGP peering into your network infrastructure, or using orchestrator-specific options.
If you just want simple connectivity, a better option is to run the calico/node service on the non-Calico node. The calicoctl command line tool can be used to launch the calico/node container, configured to connect to the datastore the cluster is using.
That causes the routes to be distributed to the host, which is then able to reach the workloads.
A similar reference: https://github.com/projectcalico/calico/issues/858
Hope this helps.
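As a sketch of that approach (assuming an etcd datastore; the endpoint and image tag below are placeholders, and flags can differ between Calico releases):

```shell
# Point calicoctl at the same datastore the Calico cluster uses
# (the etcd endpoint here is a placeholder).
export ETCD_ENDPOINTS=http://etcd.example.com:2379

# Launch the calico/node container on this non-Calico host so it
# joins the BGP mesh and receives the workload routes.
sudo -E calicoctl node run --node-image=quay.io/calico/node:v2.6.12
```

Once the routes have been distributed, the host can reach the workload IPs directly.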

Related

networking addon for google kubernetes engine

I was just checking the network driver used for Google Kubernetes Engine. It seems Calico is the default GKE driver for network policy.
networkPolicyConfig: {}
clusterIpv4Cidr: 172.31.92.0/22
createTime: '2022-01-18T19:41:27+00:00'
--
networkPolicy:
enabled: true
provider: CALICO
Is it possible to replace Calico with some other networking addon for GKE?
Calico is only used for Network Policies in GKE. By default, GKE uses a Google network plugin. You also have the option to enable Dataplane V2, which is eBPF-based.
In both cases the plugins are managed by Google, and you cannot change them.
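To check what a given cluster currently reports (a sketch; the cluster name and zone below are placeholders):

```shell
# Print the networkPolicy block of the cluster config; an empty
# result means Calico-based Network Policy is not enabled.
gcloud container clusters describe my-cluster \
    --zone us-central1-a \
    --format="value(networkPolicy)"
```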
To complement @boredabdel's answer:
You cannot change the network plugin; however, connectivity behaves quite differently depending on whether Network Policy is enabled:
Note that this connectivity differs drastically depending on whether you use GKE's native Container Network Interface (CNI) or choose to use Calico's implementation by enabling Network policy when you create the cluster.
If you use GKE's CNI, one end of the Virtual Ethernet Device (veth) pair is attached to the Pod in its namespace, and the other is connected to the Linux bridge device cbr0. In this case, the following command shows the various Pods' MAC addresses attached to cbr0:
arp -n
Running the following command in the toolbox container shows the root namespace end of each veth pair attached to cbr0:
brctl show cbr0
If Network Policy is enabled, one end of the veth pair is attached to the Pod and the other to eth0. In this case, the following command shows the various Pods' MAC addresses attached to different veth devices:
arp -n
Running the following command in the toolbox container shows that there is not a Linux bridge device named cbr0:
brctl show
The iptables rules that facilitate forwarding within the cluster differ from one scenario to the other. It is important to have this distinction in mind during detailed troubleshooting of connectivity issues.
Also have a look at the documentation regarding Migrating from Calico to Dataplane V2, which may also affect networking.
Additionally, you may find the Network overview for GKE documentation useful.
Here's also a very detailed explanation of networking inside GKE.

Accessing cloud Kubernetes services on local network

Looking for a solution that gives various developers on a local network access to services on a cloud-hosted Kubernetes cluster, without each machine having to use kubectl to port-forward (and therefore have direct access to the cluster).
Essentially looking for a way to run a Docker container or VM on which I can expose the ports, or even better, a way to forward local network traffic to the cluster's DNS.
Really stuck looking for solutions; any help would be appreciated.

Unable to connect to k8s cluster using master/worker IP

I am trying to install a Kubernetes cluster with one master node and two worker nodes.
I acquired 3 VMs for this purpose, running Ubuntu 21.10. On the master node, I installed kubeadm:1.21.4, kubectl:1.21.4, kubelet:1.21.4 and docker-ce:20.4.
I followed this guide to install the cluster. The only difference was in my init command, where I did not pass --control-plane-endpoint. I used Calico CNI v3.19.1 and Docker as the CRI runtime.
After I installed the cluster, I deployed minio pod and exposed it as a NodePort.
The pod got deployed on the worker node (10.72.12.52); my master node IP is 10.72.12.51.
For the first two hours, I was able to access the login page via all three IPs (10.72.12.51:30981, 10.72.12.52:30981, 10.72.13.53:30981). However, after two hours, I lost access to the service via 10.72.12.51:30981 and 10.72.13.53:30981. Now I am only able to access the service from the node on which it is running (10.72.12.52).
I have disabled the firewall and added calico.conf file inside /etc/NetworkManager/conf.d with the following content:
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*;interface-name:vxlan.calico
What am I missing in the setup that might cause this issue?
This is a community wiki answer posted for better visibility. Feel free to expand it.
As mentioned by @AbhinavSharma, the problem was solved by switching from Calico to Flannel CNI.
More information regarding Flannel itself can be found here.
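For reference, a switch like that usually means deleting the Calico manifests and applying Flannel's (a sketch: both URLs are the manifests published by the respective projects and may change between releases; existing nodes may also need their CNI state cleaned up or a reboot):

```shell
# Remove the Calico manifests applied at install time (v3.19 in this setup).
kubectl delete -f https://docs.projectcalico.org/v3.19/manifests/calico.yaml

# Apply the Flannel manifest from the flannel-io project.
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
```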

Node to Pod communication doesn't work on GCP by default

I am doing the CKAD (Certified Kubernetes Application Developer) 2019 using GCP (Google Cloud Platform), and I am facing timeouts when trying to curl a pod from another node. I set up a simple Pod with a simple Service.
It looks like the firewall is blocking some IP/port/protocol, but I cannot find any documentation.
Any ideas?
So after some heavy investigation with tshark and the Google firewall, I was able to unblock myself.
If you add a new firewall rule to GCP allowing the IPIP protocol for your node networks (in my case 10.128.0.0/9), the curl works!
Source: https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml
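Such a rule might be created like this (a sketch: the rule name is made up, and 10.128.0.0/9 is the node range from this particular setup):

```shell
# Allow IPIP (IP protocol 4), which Calico uses for its overlay,
# between nodes in the 10.128.0.0/9 range.
gcloud compute firewall-rules create allow-calico-ipip \
    --network default \
    --allow ipip \
    --source-ranges 10.128.0.0/9
```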
You can create a NodePort Service and use the command below to set a firewall rule.
gcloud compute firewall-rules create test-node-port --allow tcp:[NODE_PORT]
Then you can access the service even from outside the cluster.

Monitor Calico network policies behavior

How can I monitor network policy behavior?
I have a k8s cluster with calico as SDN.
For example I create a network policy to deny traffic from a set of IPs.
I try to make some connections from those IPs, and they fail.
Where can I see that this traffic is being rejected because of a network policy?
Thank you.
There is no such possibility by default, but you can follow this instruction to create a user interface that shows blocked and allowed connections in real time.
Getting started with Calico could also be useful.
You can find Calico logs in the /var/log/calico folder on each node running Calico.
More about logging can be found here: Calico Logging.
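One way to make denials directly visible is Calico's Log rule action, which writes matching packets to the node's kernel log (visible via dmesg or syslog) before a later rule denies them. A sketch, assuming a Calico version with the projectcalico.org/v3 API; the policy name and CIDR are placeholders:

```yaml
# Hypothetical GlobalNetworkPolicy: log, then deny, ingress from 192.0.2.0/24.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: log-and-deny-blocked-ips
spec:
  order: 10
  ingress:
    - action: Log
      source:
        nets:
          - 192.0.2.0/24
    - action: Deny
      source:
        nets:
          - 192.0.2.0/24
```

Apply it with calicoctl apply -f, then watch the kernel log on the receiving node while retrying a connection from a blocked IP.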