How to assign a VPC subnet (GCP) to a Kubernetes cluster? - kubernetes

I have a VPC set up on Google Cloud with 192.0.0.0/24 as its subnet, in which I am trying to set up a k8s cluster for the following servers:
VM : VM NAME : Internal IP
VM 1 : k8s-master : 192.24.1.4
VM 2 : my-machine-1 : 192.24.1.1
VM 3 : my-machine-2 : 192.24.1.3
VM 4 : my-machine-3 : 192.24.1.2
Here k8s-master would act as the master and the other 3 machines would act as nodes. I am using the following command to initialize my cluster:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 ( need to change this to vpc subnet) --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=k8s-master-external-ip
I am using Flannel, and I run the following command to set up networking for my cluster:
sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Now, whenever I deploy a new pod, k8s assigns it an IP from 10.244.0.0/16, which is not reachable from my Eureka server, since that server runs on the Google Cloud VPC CIDR.
I want to configure k8s so that it uses the VPC subnet IPs (the internal IP range of the machines where the pods are deployed).
I even tried manually downloading kube-flannel.yml and changing the CIDR to my subnet, but that did not solve the problem.
Need help to resolve this. Thanks in advance.

Kubernetes needs 3 subnets.
1 subnet for the nodes (this would be your VPC subnet, 192.168.1.0/24)
1 subnet for your pods.
Optionally, 1 subnet for Services.
These subnets cannot be the same; they have to be distinct, non-overlapping ranges.
I believe what's missing in your case are routes that let the pods talk to each other across nodes. Have a look at this guide for setting up the routes you need.
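For context, on GCP (without alias IP ranges) cross-node pod traffic is usually handled with one route per node that points the node's pod CIDR at that node. A minimal sketch, assuming a hypothetical VPC network my-vpc, a node my-machine-1 in zone us-central1-a, and a pod CIDR of 10.244.1.0/24 allocated to that node:
gcloud compute routes create k8s-pods-my-machine-1 --network=my-vpc --destination-range=10.244.1.0/24 --next-hop-instance=my-machine-1 --next-hop-instance-zone=us-central1-a
You would create one such route per node, using the pod CIDR that kubeadm allocated to it.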

Related

MetalLB unable to reach external IP address

I have a 3-node Rancher RKE custom cluster deployed on Rocky Linux VMs on vSphere.
I deployed MetalLB on the cluster and defined an IP pool from my node network.
When I create a LoadBalancer service, everything looks fine and I get an external IP address from the pool. However, I cannot reach this IP address from the node IP network; I can't even reach it from the nodes themselves. When I try to curl the external IP address from one of the nodes, I get nowhere (No route to host).
Curling the cluster IP or the pod itself works fine.
Also, if I create a NodePort service for the pod, I can reach it without issue from outside the cluster.
Any ideas?
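For reference, a MetalLB pool of the kind described above is typically defined roughly like this (MetalLB 0.13+ CRD syntax; the names and the address range are placeholders). In layer 2 mode an L2Advertisement referencing the pool is also required, otherwise the external IPs are never announced on the node network:
kubectl apply -f - <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: node-network-pool        # placeholder name
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250  # placeholder range from the node network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: node-network-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - node-network-pool
EOF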

How to access the ClusterIP of one pod from another pod in AWS EKS

I am new to AWS EKS and k8s. I am trying to deploy a Hyperledger network on AWS EKS, and for that I need the pods to connect to each other. When I try to ping from one pod to another, it does not work.
Pod specification: an AWS EKS cluster with 2 worker nodes; the pods run Linux.
How can I ping the ClusterIP of one pod from another pod?
Kubernetes Service IPs are virtual IPs. You cannot ping a Kubernetes Service IP because there is no endpoint behind that IP that can reply to an ICMP echo request. You can only connect to one of the backing pods of the Service via an ip:port combination.
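As an illustration (the Service name, namespace and port here are hypothetical), test TCP connectivity to the Service port from a throwaway pod instead of pinging:
# assumes a Service named my-service in namespace default exposing port 8080
kubectl run test-client --rm -it --image=busybox --restart=Never -- wget -qO- http://my-service.default.svc.cluster.local:8080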

Redis Cluster Client doesn't work with Redis cluster on GKE

My setup has a K8S Redis cluster with 8 nodes and 32 pods across them and a load balancer service on top.
I am using a Redis cluster client to access this cluster through the load balancer's external IP. However, when handling queries, as part of Redis Cluster redirection (MOVED / ASK) the cluster client receives the internal IP addresses of the 32 pods, and connections to those fail within the client.
For example, I provide the IP address of the load balancer (35.245.51.198:6379), but the Redis cluster client throws errors like:
Caused by: redis.clients.jedis.exceptions.JedisConnectionException: Failed connecting to host 10.32.7.2:6379, which is an internal Pod IP.
Any ideas about how to deal with this situation will be much appreciated.
Thanks in advance.
If you're running on GKE, you can NAT the Pod IP using the IP masquerade agent:
Using IP masquerading in your clusters can increase their security by preventing individual Pod IP addresses from being exposed to traffic outside link-local range (169.254.0.0/16) and additional arbitrary IP ranges
Your issue, specifically, is that the pod range is within 10.0.0.0/8, which is a non-masquerade CIDR by default.
You can change this using a ConfigMap so that the range is treated as masqueraded and traffic from it leaves with the node's IP as the source address.
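As a rough sketch (the CIDR below is a placeholder), the ip-masq-agent ConfigMap lists only the ranges that should NOT be masqueraded, so leaving the pod range out of nonMasqueradeCIDRs makes pod traffic to everything else use the node's IP:
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
    - 192.168.0.0/24   # placeholder: only ranges that should keep the pod source IP
    masqLinkLocal: false
    resyncInterval: 60s
EOF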
Alternatively, you can change the pod range in your cluster to a range that is masqueraded by default.
I have been struggling with the same problem while installing bitnami/redis-cluster on GKE.
In order to have the right networking settings, you should create the cluster as a public cluster.
The equivalent command line for creating the cluster in MYPROJECT is:
gcloud beta container --project "MYPROJECT" clusters create "redis-cluster" --zone "us-central1-c" --no-enable-basic-auth --cluster-version "1.21.5-gke.1802" --release-channel "regular" --machine-type "e2-medium" --image-type "COS_CONTAINERD" --disk-type "pd-standard" --disk-size "100" --metadata disable-legacy-endpoints=true --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" --num-nodes "3" --logging=SYSTEM,WORKLOAD --monitoring=SYSTEM --no-enable-ip-alias --network "projects/MYPROJECT/global/networks/default" --subnetwork "projects/oddsjam/regions/us-central1/subnetworks/default" --no-enable-intra-node-visibility --no-enable-master-authorized-networks --addons HorizontalPodAutoscaling,HttpLoadBalancing,GcePersistentDiskCsiDriver --enable-autoupgrade --enable-autorepair --max-surge-upgrade 1 --max-unavailable-upgrade 0 --workload-pool "myproject.svc.id.goog" --enable-shielded-nodes --node-locations "us-central1-c"
Then you need to create as many external IP addresses as you have Redis nodes in the VPC network product. Those IP addresses will be picked up by the Redis nodes automatically.
Then you are ready to get the values.yaml of the Bitnami Redis Cluster Helm chart and adapt the configuration to your use case. Add the list of external IPs you created to the cluster.externalAccess.loadBalancerIP value.
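As a rough sketch (key nesting can differ between chart versions, and the IPs are placeholders for the addresses reserved above), the relevant part of values.yaml looks something like this:
cat > values.yaml <<EOF
cluster:
  externalAccess:
    enabled: true
    service:
      type: LoadBalancer
      loadBalancerIP:
      - 35.245.X.1   # placeholder reserved external IPs, one per Redis node
      - 35.245.X.2
      - 35.245.X.3
EOF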
Finally, you can run the command to install a Redis cluster on GKE by running
helm install cluster-name -f values.yaml bitnami/redis-cluster
This command will give you the password of the cluster. You can then use redis-cli to connect to the new cluster with:
redis-cli -c -h EXTERNAL_IP -p 6379 -a PASSWORD

MongoDB replica set spanning two different Kubernetes clusters on different physical hosts

Objective: establish a MongoDB replica set running across two different Kubernetes clusters on different physical hosts.
Host 1, Kubernetes cluster 1: a pod is running, say, a mongo1 instance with the replica set name “my-mongo-set”. A service, say mongo1-service, is created with the pod running behind it.
Host 2, Kubernetes cluster 2: another pod is running a mongo2 instance with the same replica set name “my-mongo-set”. A service, say mongo2-service, is created with the pod running behind it.
Now, I am unable to put MongoDB into replication mode from inside one pod to the other pod.
Host 1 (Kubernetes cluster 1): mongo1-service -> mongo1-pod
Host 2 (Kubernetes cluster 2): mongo2-service -> mongo2-pod
I need pod-to-pod communication between these two Kubernetes clusters running on different machines.
I have been unable to expose the Kubernetes mongo service IP using a NodePort, a LoadBalancer, an ingress controller (Kong), etc.
I am new to Kubernetes; I installed the Kubernetes components (kubectl, kubeadm, kubelet) through apt-get and then ran the kubeadm init command.
Any suggestions on how to achieve this?
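(For reference, a minimal NodePort Service for mongo1 would look roughly like this; the name, label and ports are placeholders that would need to match the actual deployment. The other cluster would then reach it at <host-1-ip>:<nodePort>.)
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: mongo1-service
spec:
  type: NodePort
  selector:
    app: mongo1        # hypothetical label on the mongo1 pod
  ports:
  - port: 27017        # MongoDB default port
    targetPort: 27017
    nodePort: 30017    # must fall in the cluster's NodePort range (default 30000-32767)
EOF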

Change kubeadm init ip address

How do I change the IP when I run kubeadm init? I create the master node on Google Compute Engine and want to join nodes from AWS and Azure, but kubeadm uses the internal IP address, which is only reachable from within the Google Cloud Platform network. I tried to use --apiserver-advertise-address=<external ip>, but in that case kubeadm gets stuck at [init] This might take a minute or longer if the control plane images have to be pulled. The firewall ports are open.
If I understand correctly, what you are trying to do is use a GCP instance running kubeadm as the master, with two nodes located on two other clouds.
What you need for this to work is a working load balancer with an external IP pointing to your instance, forwarding the TCP packets back and forth.
First I created a static external IP address for my instance:
gcloud compute addresses create myexternalip --region us-east1
Then I created a target pool for the LB and added the instance:
gcloud compute target-pools create kubernetes --region us-east1
gcloud compute target-pools add-instances kubernetes --instances kubeadm --instances-zone us-east1-b
Add a forwarding rule serving an external IP and port range that points to your target pool. You'll have to do this for every port the nodes need to contact your kubeadm instance on. Use the external IP created before.
gcloud compute forwarding-rules create kubernetes-forward --address myexternalip --region us-east1 --ports 22 --target-pool kubernetes
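For the control plane itself you would add at least one more rule of the same shape for the API server port (6443 by default with kubeadm; the rule name here is a placeholder):
gcloud compute forwarding-rules create kubernetes-apiserver-forward --address myexternalip --region us-east1 --ports 6443 --target-pool kubernetes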
You can now check your forwarding rule, which will look something like this:
gcloud compute forwarding-rules describe kubernetes-forward
IPAddress: 35.196.X.X
IPProtocol: TCP
creationTimestamp: '2018-02-23T03:25:49.810-08:00'
description: ''
id: 'XXXXX'
kind: compute#forwardingRule
loadBalancingScheme: EXTERNAL
name: kubernetes-forward
portRange: 80-80
region: https://www.googleapis.com/compute/v1/projects/XXXX/regions/us-east1
selfLink: https://www.googleapis.com/compute/v1/projects/XXXXX/regions/us-east1/forwardingRules/kubernetes-forward
target: https://www.googleapis.com/compute/v1/projects/XXXXX/regions/us-east1/targetPools/kubernetes
Now you can go through the usual process to install kubeadm and set up your cluster on the instance; kubeadm init took around 50 seconds on mine.
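As a sketch (the IP, token and hash below are placeholders), you can include the load balancer's external IP in the API server certificate SANs so the AWS and Azure nodes can join through it:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-cert-extra-sans=35.196.X.X
# on the AWS / Azure nodes, join through the external IP using the values printed by kubeadm init
sudo kubeadm join 35.196.X.X:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>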
Afterwards, if the ports are correctly opened in your firewall and forwarded to your master, the nodes from AWS and Azure should be able to join.
Congratulations, now you have a multicloud kubernetes cluster! :)