How to set up pod communication between 2 different Kubernetes Cluster - kubernetes

I am working on a use case where I need to set up two Kubernetes clusters and establish a communication channel between two pods that are in separate GKE clusters.
Please suggest how to implement this.

You can use these steps:
First cluster
1. Create a deployment.
2. Expose the deployment using a service of type NodePort.
3. Enable a firewall rule for the port that is exposed by the service.
4. List the node IP addresses.
Second cluster
1. Create a deployment.
2. In the deployment, point to the endpoint of the first cluster's service as an environment variable:
env:
- name: SERVICE_URL
  value: xx.xx.xx.xx:xxxxx
Here xx.xx.xx.xx is a node IP of your first cluster and xxxxx is the service's NodePort.
This way the pod in the second cluster can communicate with the pod in the first cluster.
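The steps above could look roughly like this (a sketch only; the names, image, ports and the example IP 203.0.113.10 are placeholders, not values from the answer):

```yaml
# Cluster 1: expose the deployment on a fixed NodePort
apiVersion: v1
kind: Service
metadata:
  name: my-app          # placeholder name
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080     # open this port in the firewall
---
# Cluster 2: hand the first cluster's node IP and NodePort to the client pod
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
      - name: client
        image: busybox   # placeholder image
        command: ["sleep", "infinity"]
        env:
        - name: SERVICE_URL
          value: "203.0.113.10:30080"  # node IP of cluster 1 + NodePort
```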

Consider using Istio.
Here is a detailed guide on how to configure a multicluster mesh with a single-network shared control plane topology over two Google Kubernetes Engine clusters.
This allows direct inter-cluster pod-to-pod communication.
Please let me know if that helped.

Related

How to set DNS entries & network configuration for a Kubernetes cluster at home (noob here)

I am currently running a Kubernetes cluster on my own home server (in Proxmox CTs; it was kind of difficult to get working because I am using ZFS too, but it runs now), and the setup is as follows:
lb01: haproxy & keepalived
lb02: haproxy & keepalived
etcd01: etcd node 1
etcd02: etcd node 2
etcd03: etcd node 3
master-01: k3s in server mode with a taint for not accepting any jobs
master-02: same as above, just joining with the token from master-01
master-03: same as master-02
worker-01 - worker-03: k3s agents
If I understand it correctly, k3s ships with Flannel pre-installed as the CNI, as well as Traefik as the ingress controller.
I've set up Rancher on my cluster as well as Longhorn; the volumes are just ZFS volumes mounted inside the agents though, and as they aren't on different HDDs I've set the replicas to 1. I have a friend running the same setup (we set them up together, just yesterday) and we are planning on joining our networks through VPN tunnels and then providing storage nodes for each other as an offsite backup.
So far I've hopefully got everything correct.
Now to my question: I've got both a static IP at home and a domain, and I've pointed that domain at my static IP.
Something like this (I don't know exactly how DNS entries are written, this is just from the top of my head for reference, but the entries are working well):
A example.com. [[my-ip]]
CNAME *.example.com. example.com
I've currently made a port-forward to one of my master nodes for ports 80 & 443, but I am not quite sure how you would actually configure that with HA in mind, and my Rancher is throwing a 503 after visiting global settings, although I have not changed anything.
So now my question: how would one actually configure the port-forward? As far as I know k3s has a load balancer pre-installed, but how would one configure those port-forwards for HA? The one master node the forward points to could, theoretically, just stop working, and then all services would be unreachable from outside.
Assuming your apps are running on port 80 and port 443, your ingress should give you a service with an external IP, and you would point your DNS at that. Read below for more info.
Seems like you are not a noob! You've got a lot going on with your cluster setup. What you are asking is a bit complicated to answer and I will have to make some assumptions about your setup, but I will do my best to give you at least some initial info.
This tutorial has a ton of great info and may help you with what you are doing. They use kubeadm instead of k3s, but you can skip that section if you want and still use k3s.
https://www.debontonline.com/p/kubernetes.html
If you are setting up and installing etcd on your own, you don't need to do that: k3s will create an etcd cluster for you that runs inside pods on your cluster.
Load Balancing your master nodes
The haproxy + keepalived nodes would be configured to point to the IPs of your master nodes at port 6443 (TCP). keepalived will give you a virtual IP, and you would configure your kubeconfig (the one you get from k3s) to talk to that IP. On your router you will want to reserve an IP for this (make sure not to assign it to any computers).
This is a good video that explains how to do it with a nodejs server but concepts are the same for your master nodes:
https://www.youtube.com/watch?v=NizRDkTvxZo
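As a sketch, the haproxy part of this could look like the fragment below (the master IPs 192.168.1.11-13 are placeholders for your own):

```
# haproxy.cfg fragment: TCP load balancing for the k3s API server
frontend k8s-api
    bind *:6443
    mode tcp
    default_backend k8s-masters

backend k8s-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server master-01 192.168.1.11:6443 check
    server master-02 192.168.1.12:6443 check
    server master-03 192.168.1.13:6443 check
```

keepalived then floats the reserved virtual IP between lb01 and lb02, and your kubeconfig points at that VIP on port 6443.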
Load Balancing your applications running in the cluster
Use a K8s Service; read more about it here: https://kubernetes.io/docs/concepts/services-networking/service/
Essentially you need an external IP. I prefer to do this with MetalLB, which gives you a Service of type LoadBalancer with an external IP.
Add this flag to k3s when creating the initial master node:
https://metallb.universe.tf/configuration/k3s/
Configure MetalLB:
https://metallb.universe.tf/configuration/#layer-2-configuration
You will want to reserve more IPs on your router and put them under the addresses section in the YAML below. In this example you have 11 IPs, in the range 192.168.1.240 to 192.168.1.250.
First install MetalLB with these yaml files (the ConfigMap below needs the metallb-system namespace that they create):
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
source - https://metallb.universe.tf/installation/#installation-by-manifest
Then create this as a file, for example metallb-cm.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
kubectl apply -f metallb-cm.yaml
Ingress
Your ingress will need a Service of type LoadBalancer; use its external IP as the external IP you point DNS at.
kubectl get service -A - look for your ingress service and check that it has an external IP and does not say pending.
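A Service that MetalLB hands an external IP to can be sketched like this (the name, selector and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-controller   # placeholder name
spec:
  type: LoadBalancer         # MetalLB assigns an IP from the address pool
  selector:
    app: ingress-controller
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```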
I will do my best to answer any of your follow up questions. Good Luck!

Configure keepalived for services (NodePort) on Kubernetes

I have a k8s cluster which contains 2 nodes, and in the cluster I deployed 2 pods of the same application. For some reason I have to deploy a service (NodePort) for each pod, so in total I have 2 services for the application; for example, the NodePort IPs are 192.142.1.11 and 192.142.1.12. Using these 2 IPs I can access the application from any node.
Now I am going to use keepalived to set up HA for the application. So:
What's the best practice for installing the keepalived service: on each k8s node, or deployed as a pod?
How do I configure the interface in the keepalived.conf file? The NodePort IPs are configured on the kube-ipvs0 interface created by k8s, and its status is down, so it seems it cannot be used as the interface in keepalived.conf. Should I use the node's external interface if I start the keepalived service on each node?
Thanks for your help.
If your final goal is master HA / load balancing of user services in an on-prem environment, then you can take a look at these two projects:
Kubevip: can do both (HA masters + LoadBalancer type for user workloads).
Metallb: LoadBalancer type for user workloads.
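On the interface question, keepalived is normally bound to a real node interface rather than kube-ipvs0. A minimal sketch, assuming the node's external interface is eth0 and picking 192.142.1.100 as the virtual IP (both are placeholders):

```
# keepalived.conf sketch, run natively on each node
vrrp_instance VI_1 {
    state MASTER           # BACKUP on the second node
    interface eth0         # the node's external interface, not kube-ipvs0
    virtual_router_id 51
    priority 100           # use a lower priority (e.g. 90) on the backup
    advert_int 1
    virtual_ipaddress {
        192.142.1.100
    }
}
```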

EKS provisioned LoadBalancers reference all nodes in the cluster. If the pods reside on 1 or 2 nodes is this efficient as the ELB traffic increases?

In Kubernetes (on AWS EKS), when I create a service of type LoadBalancer, the resulting EC2 load balancer is associated with all nodes (instances) in the EKS cluster, even though the selector in the service will only find the pods running on 1 or 2 of these nodes (i.e. a much smaller subset of nodes).
I am keen to understand whether this will be efficient as the volume of traffic increases.
I could not find any advice on this topic and am keen to understand if this is the correct approach.
This can introduce additional SNAT when a request arrives at a node the pod is not running on, and it also does not preserve the source IP of the request. You can change externalTrafficPolicy to Local, which associates only the nodes that have pods running with the LoadBalancer.
You can get more information from the following links.
Preserve source IP
EKS load balancer support
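The externalTrafficPolicy change can be sketched like this (the service name, selector and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # default is Cluster
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```

With Local, only nodes actually running matching pods pass the load balancer's health check, and the client source IP is preserved.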
On EKS, if you are using the AWS CNI (the default for EKS), then you can use the aws-alb-ingress-controller to create ELBs & ALBs.
While creating the load balancer you can use the annotation below; traffic is then routed only to your pods.
alb.ingress.kubernetes.io/target-type: ip
Reference:
https://github.com/aws/amazon-vpc-cni-k8s
https://github.com/kubernetes-sigs/aws-alb-ingress-controller
https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/ingress/annotation/#target-type
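Putting the annotation into a full (sketched) Ingress for the aws-alb-ingress-controller of that era, with placeholder names and the v1beta1 API it used:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip   # ALB targets pod IPs directly
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: my-app
          servicePort: 80
```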

redis deployment on Kubernetes with sentinel

I am deploying Redis with a Sentinel architecture on Kubernetes.
When I work with deployments inside my cluster that require Redis, all is working fine.
The problem is that some services of my deployment are located on a different Kubernetes cluster.
When the clients reach the Redis Sentinel (which I exposed via a NodePort that maps internally to 26379), they get a reply with the master IP.
What actually happens is that they get the Redis master's internal Kubernetes IP and the internal port 6379.
As I said, while working inside Kubernetes this is fine, since the clients can access that IP, but when the services are external it is not reachable.
I found that there are configuration directives named:
cluster-announce-ip and cluster-announce-port
I have set those values to the external IP of the cluster and the external port, hoping that it would solve the problem, but still no change.
I am using the official docker image: redis:4.0.11-alpine
Any help would be appreciated.
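One thing worth sketching here: cluster-announce-ip/cluster-announce-port apply to Redis Cluster mode, while Sentinel and the replicas have their own announce directives in Redis 4.x. Assuming an external IP of 203.0.113.20 and placeholder external ports, such a configuration would look roughly like:

```
# sentinel.conf: advertise the externally reachable address to clients
sentinel announce-ip 203.0.113.20
sentinel announce-port 31379

# redis.conf on the replicas (Redis 4.x still uses the slave-* names)
slave-announce-ip 203.0.113.20
slave-announce-port 31380
```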

Kubernetes NodePort / Load Balancer / Ingress on a Multi-Master Setup: Is it necessary?

I'm fairly new to this but I'm setting up a multi-master, high availability Kubernetes cluster of at least 3 masters and a variable number of nodes. I'm trying to do this WITHOUT the use of kube-spray or any other tools, in order to learn the true ins-and-outs. I feel I have most of it down except one bit:
My understanding is:
A NodePort allocates a port to a specific service
A Load Balancer is an external resource that allows for external access
An Ingress Controller allows you to configure specific paths to services and ports.
Some points about my cluster:
The pods I deploy can run on any machine in the cluster and don't need to be externally accessible.
My masters are also worker nodes and can run pods
etcd runs on each master
My question is, do I need a NodePort/LB/Ingress Controller? I'm trying to understand why I would need any of the above. If a master is joined to an existing cluster alongside another master, the pods are distributed between them, right? Isn't that all I need? Please help me to understand as I feel I'm missing a key concept.
First of all, NodePort, LoadBalancer and Ingress have nothing to do with setting up the Kubernetes cluster. These three are tools to expose your apps to the outside world so that you can access them from outside the Kubernetes cluster.
There are two parts here:
Setting up the highly available Kubernetes cluster with three masters. I have written a blog post on how to set up a multi-master Kubernetes cluster; it will give you a brief idea of the process.
https://velotio.com/blog/2018/6/15/kubernetes-high-availability-kubeadm
Now once you have your Kubernetes cluster ready, you can start deploying your applications on it (pods, services etc.). The applications you deploy might need to be exposed to the outside world, for example a website hosted on your Kubernetes cluster that needs to be accessed from the internet. This is where NodePort, LoadBalancer or Ingress come into the picture. The difference between NodePort, LoadBalancer and Ingress, and when to use which, is explained very well in this article:
https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0
Hope this gives you some clarity.
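As a concrete feel for the simplest of the three, a NodePort Service can be sketched like this (name, selector and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-website
spec:
  type: NodePort
  selector:
    app: my-website
  ports:
  - port: 80          # cluster-internal service port
    targetPort: 8080  # container port
    nodePort: 30080   # reachable on <any-node-ip>:30080 from outside
```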
EDIT: This edit is for kubeadm config file for 1.13(see comments)
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
  certSANs:
  - "VIRTUAL IP"
controlPlaneEndpoint: "VIRTUAL IP"
etcd:
  external:
    endpoints:
    - https://ETCD_0_IP:2379
    - https://ETCD_1_IP:2379
    - https://ETCD_2_IP:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key