How to install a Kubernetes cluster on a Rancher cluster?

I use Rancher Server across several servers:
Master
Node1
Node2
Node3
Maybe I only need the Rancher Agent on the node servers.
I also want to build a Kubernetes cluster on these servers, i.e. install the Kubernetes master on the Rancher master and the Kubernetes nodes (kubelet) on the Rancher nodes. Is that right?
So the Kubernetes nodes can't be installed through the Rancher server, but have to be set up by hand?

You will need a Rancher Agent on any server you want Rancher to place containers on. Rancher can deploy Kubernetes for you. I believe what you want to do is add all of the nodes, including the Rancher master, to a single Cattle environment (the default environment is Cattle). When adding the Rancher Server, make sure you set CATTLE_AGENT_IP=. Once the hosts are registered, you will want to set host labels on them. For nodes 1, 2 and 3, set the label compute=true. On the Rancher Server, set two host labels: etcd=true and orchestration=true.
Once the labels are set up, click on Catalog and search for Kubernetes. You can probably stick with most defaults, but CHANGE plane isolation to required.
Rancher should deploy the Kubernetes management servers on the same host as your Rancher Server, and the remaining nodes will be Kubernetes minions.
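The registration commands for such a setup might look like this (a sketch, assuming Rancher 1.x with a Cattle environment; the agent version, URL, token, and IPs are placeholders to be copied from your own "Add Host" screen):

```shell
# On the Rancher Server host (gets the etcd and orchestration labels):
sudo docker run -d --privileged \
  -e CATTLE_AGENT_IP="<server-ip>" \
  -e CATTLE_HOST_LABELS="etcd=true&orchestration=true" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  rancher/agent:v1.2.11 https://<rancher-url>/v1/scripts/<token>

# On node1, node2 and node3 (the compute nodes):
sudo docker run -d --privileged \
  -e CATTLE_HOST_LABELS="compute=true" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  rancher/agent:v1.2.11 https://<rancher-url>/v1/scripts/<token>
```

Note that CATTLE_HOST_LABELS takes multiple labels separated by &.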

Related

Kube-proxy was not found in my rancher cluster

My Rancher cluster has been set up for around 3 weeks and everything works fine. But there is one problem while installing MetalLB: I found there is no kube-proxy in my cluster, not even a kube-proxy pod on any node, so I could not follow the installation guide to set up the kube-proxy ConfigMap.
For me, it is really strange to have a cluster without kube-proxy.
My setup for the Rancher cluster is below:
Cluster Provider: RKE
Provisioning: use existing nodes and create a cluster with RKE
Network Plugin: canal
Maybe I misunderstand something, since NodePort and ClusterIP services work correctly.
Finally, I found my kube-proxy. It is a host process, not a Docker container.
In Rancher, we should edit cluster.yml to pass extra args to kube-proxy. Rancher will then apply them to every node of the cluster automatically.
root 3358919 0.1 0.0 749684 42564 ? Ssl 02:16 0:00 kube-proxy --proxy-mode=ipvs --ipvs-scheduler=lc --ipvs-strict-arp=true --cluster-cidr=10.42.0.0/16
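The cluster.yml change described above could look like this (a minimal sketch, assuming an RKE-provisioned cluster; the args mirror the ps output above, and the section should be merged into an existing services: block if you already have one):

```shell
# Append a kube-proxy extra_args section to RKE's cluster.yml.
# Rancher/RKE rolls these flags out to kube-proxy on every node.
cat >> cluster.yml <<'EOF'
services:
  kubeproxy:
    extra_args:
      proxy-mode: ipvs
      ipvs-scheduler: lc
EOF
```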

Should Windows K8s nodes have aws-node & kube-proxy pods?

I have this mixed cluster which shows all the nodes as Ready (both Windows & Linux ones). However, only the Linux nodes have aws-node & kube-proxy pods. I RDPed into a Windows node and can see a kube-proxy service.
My question remains: do the Windows nodes need aws-node & kube-proxy pods in the kube-system namespace or do they work differently than Linux ones?
kube-proxy pods are part of the default installation of Kubernetes. They are created automatically and are needed on both Linux and Windows.
kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
[source]
aws-node pod is a part of AWS CNI plugin for Kubernetes
The Amazon VPC Container Network Interface (CNI) plugin for Kubernetes is deployed with each of your Amazon EC2 nodes in a Daemonset with the name aws-node. The plugin consists of two primary components:
[...]
[source]
It is currently only supported on Linux. Windows nodes use a different CNI plugin, vpc-shared-eni.
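You can confirm this distribution from kubectl (a sketch; assumes kubectl access to the mixed cluster):

```shell
# Which nodes run kube-proxy and aws-node pods:
kubectl get pods -n kube-system -o wide | grep -E 'kube-proxy|aws-node'

# Compare against each node's operating system label:
kubectl get nodes -L kubernetes.io/os
```

Only the Linux nodes should appear in the first listing, while the Windows nodes handle kube-proxy as a host service, matching what you observed over RDP.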

Can kubernetes have different container runtimes/version in worker nodes?

We have an application that requires a different container runtime than what we currently have in our Kubernetes cluster. The cluster was deployed via kubeadm on bare metal; the K8s version is 1.21.3.
Can we install a different container runtime or version on a single worker node?
I just wanted to verify this, despite knowing that the k8s design is modular enough (CRI, CNI, etc.).
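Since the runtime is selected per node by the kubelet's CRI endpoint, mixing runtimes across workers is possible. A sketch of how to inspect this (the socket path and flags below are examples for a containerd node, not taken from the original question):

```shell
# The CONTAINER-RUNTIME column shows what each node currently reports:
kubectl get nodes -o wide

# On a given worker, the kubelet flags pick the runtime socket,
# e.g. in /var/lib/kubelet/kubeadm-flags.env on a kubeadm node:
#   --container-runtime=remote \
#   --container-runtime-endpoint=unix:///run/containerd/containerd.sock
# Pointing this at a different CRI socket (and restarting the kubelet)
# changes the runtime on that node only.
```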

How to set up pod communication between 2 different Kubernetes Cluster

I am working on a use case where I need to set up 2 Kubernetes clusters and establish a communication channel between 2 pods that are in separate GKE clusters.
Please suggest a solution for how to implement this.
You can use these steps in a Kubernetes cluster.
First cluster
1. Create a deployment.
2. Expose the deployment using a service of type NodePort.
3. Enable a firewall rule for the port exposed by the service.
4. List the node IP addresses.
Second cluster
1. Create a deployment.
2. In the deployment, point at the first cluster's service endpoint as an environment variable:
env:
- name: SERVICE_URL
  value: xx.xx.xx.xx:xxxxx
Here xx.xx.xx.xx will be a node IP of the first cluster and xxxxx will be the service's NodePort.
Like this, a pod in the second cluster can communicate with the pod in the first cluster.
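The first cluster's side of those steps might look like this (a sketch; the deployment name "web" and port 8080 are placeholders, and the firewall rule assumes GKE's default NodePort range):

```shell
# Expose the deployment and read back the assigned NodePort:
kubectl expose deployment web --type=NodePort --port=8080
kubectl get svc web -o jsonpath='{.spec.ports[0].nodePort}'

# GKE: open the NodePort range to the second cluster:
gcloud compute firewall-rules create allow-nodeports --allow tcp:30000-32767

# Node external IPs, one of which becomes the host part of SERVICE_URL:
kubectl get nodes -o wide
```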
Consider using Istio. There is a detailed guide on how to configure a multicluster mesh with a single-network shared control plane topology over 2 Google Kubernetes Engine clusters. This allows direct inter-cluster pod-to-pod communication.
Please let me know if that helped.

Is it possible to have kube-proxy without the kubernetes environment in vm pod using istio mesh expansion

I have been working on a very innovative project which involves both Kubernetes and Istio. I have a 2-node Kubernetes cluster set up, with Istio installed and its sidecars in the pods. I have already hosted the Bookinfo application on the nodes, but by using a separate VM, following the procedure given in Istio Mesh Expansion.
So I have a VM where the details and mysqldb pods are present. The other pods run in the k8s cluster, and right now they communicate within a private network.
The next phase of my project requires me to set up kube-proxy separately, without installing Kubernetes in the VM, so that it can communicate directly with the kube-api server running on the master nodes of the k8s cluster through the private network. Can anybody suggest a way to go about this?
All components of Kubernetes should be connected to the kube-api. Otherwise, they will not work.
So my next phase of project would require me to setup Kube-proxy separately without installing Kubernetes in the VM, so as to allow it to directly communicate to the Kube-Api Server running in the master nodes of the k8s cluster through the private network.
To access the kube-api server through a Service with a private ClusterIP address, you would already need a kube-proxy. So it is impossible to use any private ClusterIP address until you set up a kube-proxy, and that kube-proxy must reach the kube-api at an address outside the ClusterIP range.
Kube-api can be exposed using NodePort or LoadBalancer type of service.
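Until then, the VM can talk to the API server directly over the private network, bypassing ClusterIP addressing entirely (a sketch; the master IP, port, and token path are placeholders for your environment):

```shell
# From the VM, hit the kube-apiserver on its node address:
curl -k "https://<master-ip>:6443/version" \
  -H "Authorization: Bearer $(cat /path/to/token)"
```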