Redis Cluster Client doesn't work with Redis cluster on GKE - kubernetes

My setup has a K8S Redis cluster with 8 nodes and 32 pods across them and a load balancer service on top.
I am using a Redis cluster client to access this cluster using the load balancer's external IP. However, when handling queries, as part of Redis cluster redirection (MOVED / ASK), the cluster client receives internal IP addresses of the 32 Pods, connection to which fails within the client.
For example, I provide the load balancer's IP address (35.245.51.198:6379), but the Redis cluster client throws errors like:
Caused by: redis.clients.jedis.exceptions.JedisConnectionException: Failed connecting to host 10.32.7.2:6379
where 10.32.7.2 is an internal Pod IP.
Any ideas about how to deal with this situation will be much appreciated.
Thanks in advance.

If you're running on GKE, you can NAT the Pod IP using the IP masquerade agent:
Using IP masquerading in your clusters can increase their security by preventing individual Pod IP addresses from being exposed to traffic outside the link-local range (169.254.0.0/16) and additional arbitrary IP ranges.
Your issue, specifically, is that the Pod range is in 10.0.0.0/8, which is a non-masquerade CIDR by default.
You can change this using a ConfigMap so that the range is masqueraded, making traffic leave the node with the node's external IP as the source address.
Alternatively, you can change the Pod range in your cluster to any range that is masqueraded by default.
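For reference, a minimal ip-masq-agent ConfigMap might look like the following sketch (the nonMasqueradeCIDRs list here is illustrative; tailor it to your own network so that 10.0.0.0/8 is no longer excluded from masquerading):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    # Traffic to these CIDRs keeps the Pod IP as its source;
    # everything else (including 10.0.0.0/8) gets masqueraded.
    nonMasqueradeCIDRs:
      - 192.168.0.0/16
    resyncInterval: 60s
```

Apply it to the kube-system namespace and the agent will pick up the new configuration on its next resync.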

I have been struggling with the same problem while installing bitnami/redis-cluster on GKE.
In order to get the right networking settings, you should create the cluster as a public cluster.
The equivalent command line for creating the cluster in MYPROJECT is:
gcloud beta container --project "MYPROJECT" clusters create "redis-cluster" \
  --zone "us-central1-c" --no-enable-basic-auth --cluster-version "1.21.5-gke.1802" \
  --release-channel "regular" --machine-type "e2-medium" --image-type "COS_CONTAINERD" \
  --disk-type "pd-standard" --disk-size "100" --metadata disable-legacy-endpoints=true \
  --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
  --num-nodes "3" --logging=SYSTEM,WORKLOAD --monitoring=SYSTEM --no-enable-ip-alias \
  --network "projects/MYPROJECT/global/networks/default" \
  --subnetwork "projects/MYPROJECT/regions/us-central1/subnetworks/default" \
  --no-enable-intra-node-visibility --no-enable-master-authorized-networks \
  --addons HorizontalPodAutoscaling,HttpLoadBalancing,GcePersistentDiskCsiDriver \
  --enable-autoupgrade --enable-autorepair --max-surge-upgrade 1 --max-unavailable-upgrade 0 \
  --workload-pool "myproject.svc.id.goog" --enable-shielded-nodes --node-locations "us-central1-c"
Then you need to create as many external IP addresses in the VPC Network product as there are Redis nodes. Those IP addresses will be picked up by the Redis nodes automatically.
Then you are ready to get the values.yaml of the Bitnami Redis Cluster Helm chart and adjust the configuration to your use case. Add the list of external IPs you created to the cluster.externalAccess.loadBalancerIP value.
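As a sketch, the relevant part of values.yaml might look like this (in recent versions of the chart the list sits under cluster.externalAccess.service.loadBalancerIP; the addresses below are placeholders for the external IPs you created):

```yaml
cluster:
  externalAccess:
    enabled: true
    service:
      type: LoadBalancer
      port: 6379
      # One pre-created external IP per Redis node (placeholder values)
      loadBalancerIP:
        - 35.245.X.1
        - 35.245.X.2
        - 35.245.X.3
```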
Finally, you can run the command to install a Redis cluster on GKE by running
helm install cluster-name -f values.yaml bitnami/redis-cluster
This command will print the password of the cluster. You can then use redis-cli to connect to the new cluster with:
redis-cli -c -h EXTERNAL_IP -p 6379 -a PASSWORD

Related

Accessing etcd metrics from a pod

I'm trying to launch a prometheus pod in order to scrape the etcd metrics from within our kubernetes cluster.
I was trying to reproduce the solution proposed here: Access etcd metrics for Prometheus
Unfortunately, the etcd containers seem to be unavailable from the cluster.
# nc -vz etcd1 2379
nc: getaddrinfo for host "etcd1" port 2379: Name or service not known
In a way, this seems logical, since no etcd containers appear in the cluster:
kubectl get pods -A | grep -i etcd does not return anything.
However, when I connect to the machine hosting the master nodes, I can find the containers using the docker ps command.
The cluster has been deployed using Kubespray.
Do you know if there is a way to reach the etcd containers from the cluster pods?
Duh… the etcd container is configured with the host network. Therefore, the metrics endpoint is directly accessible on the node.
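Since etcd runs with the host network, a Prometheus scrape job can target the master node's IP directly instead of a cluster DNS name. A sketch of such a job (the node IP and certificate paths are hypothetical and depend on how your Kubespray deployment laid out the etcd client certs):

```yaml
scrape_configs:
  - job_name: etcd
    scheme: https
    tls_config:
      # etcd requires client certs; these paths are illustrative
      ca_file: /etc/prometheus/secrets/etcd/ca.pem
      cert_file: /etc/prometheus/secrets/etcd/client.pem
      key_file: /etc/prometheus/secrets/etcd/client-key.pem
    static_configs:
      - targets: ['192.168.30.10:2379']  # hypothetical master node IP
```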

How to assign a VPC subnet (GCP) to a kubernetes cluster?

I have a VPC set up on Google Cloud with 192.0.0.0/24 as the subnet, in which I am trying to set up a k8s cluster for the following servers:
VM : VM NAME : Internal IP
VM 1 : k8s-master : 192.24.1.4
VM 2 : my-machine-1 : 192.24.1.1
VM 3 : my-machine-2 : 192.24.1.3
VM 4 : my-machine-3 : 192.24.1.2
Here k8s-master would act as the master and the other 3 machines would act as nodes. I am using the following command to initialize my cluster:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 ( need to change this to vpc subnet) --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=k8s-master-external-ip
I am using flannel, with the following command to set up the network for my cluster:
sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Now, whenever I deploy a new pod, k8s assigns it an IP from 10.244.0.0/16, which is not reachable from my Eureka server, since that server is running inside the Google Cloud VPC CIDR.
I want to configure k8s such that it will use VPC subnet IPs (the internal IP of the machine where the pod is deployed).
I even tried to manually download kube-flannel.yml and change the CIDR to my subnet, but that did not solve my problem.
Need help to resolve this. Thanks in advance.
Kubernetes needs 3 subnets:
1 subnet for the nodes (this would be your VPC subnet),
1 subnet for your pods,
and optionally 1 subnet for Services.
These subnets cannot be the same; they have to be distinct.
I believe what's missing in your case are routes to make the pods talk to each other. Have a look at this guide for setting up the routes you need.
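As a sketch of what such routes look like on GCP (the per-node Pod CIDR, route name, and zone here are hypothetical; repeat the command for each node with the Pod CIDR that was assigned to it):

```shell
# Hypothetical: route the Pod CIDR assigned to my-machine-1 to that instance,
# so Pods running on other nodes can reach Pods scheduled there.
gcloud compute routes create pods-my-machine-1 \
  --destination-range 10.244.1.0/24 \
  --next-hop-instance my-machine-1 \
  --next-hop-instance-zone us-central1-a
```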

kube-apiserver is not using docker-dns

I'm using kubespray 2.14 to deploy a k8s cluster. Most of the configuration is default. I've configured OIDC authentication for kubectl. I'm using Keycloak as a locally deployed auth server. Traffic is secured by a self-signed certificate and the domain keycloak.example.com is resolved by the local dns server. I've added a CoreDNS external zone for the example.com domain.
kube-apiserver uses the host network, so names are not resolved by CoreDNS by default.
Everything works fine as long as I use resolvconf_mode: host_resolvconf. Then the coredns address is added to the host's /etc/resolv.conf file and kube-apiserver uses CoreDNS to resolve the custom domain. But this mode makes my cluster highly unstable. I don't want to go deep into this problem because I have spent too much time on it already.
To fix the stability issue, I went back to the default resolvconf_mode: docker_dns, but then I have an OIDC problem.
oidc authenticator: initializing plugin: Get https://keycloak.example.com/auth/realms/test/.well-known/openid-configuration: dial tcp: lookup keycloak.example.com on 8.8.8.8:53: no such host
kube-apiserver can't resolve the keycloak.example.com domain because it queries the nameservers from the host (8.8.8.8). I thought it should query docker_dns, as stated in the kubespray documentation:
https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md
For hostNetwork: true PODs however, k8s will let docker setup DNS settings. Docker containers which are not started/managed by k8s will also use these docker options.
Is there a way to configure kubespray inventory to fix this problem without manually adding a nameserver to each master node? Here is my part of the inventory config:
kube_oidc_url: https://keycloak.example.com/auth/realms/test
kube_oidc_client_id: cluster
resolvconf_mode: docker_dns
upstream_dns_servers:
  - 192.168.30.47
coredns_external_zones: &external_zones
  - zones:
      - example.com:53
    nameservers:
      - 192.168.30.47
    cache: 5
nodelocaldns_external_zones: *external_zones
docker-dns.conf
[Service]
Environment="DOCKER_DNS_OPTIONS=\
--dns 10.233.0.3 --dns 192.168.30.47 --dns 8.8.8.8 \
--dns-search default.svc.cluster.local --dns-search svc.cluster.local \
--dns-opt ndots:2 --dns-opt timeout:2 --dns-opt attempts:2 \
"

Calico IPs Confusion

I am a bit confused about Calico IPs:
If I add Calico to the kubernetes cluster using
kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml
The CALICO_IPV4POOL_CIDR is 192.168.0.0/16
So IP Range is 192.168.0.0 to 192.168.255.255
Now I have initiated the cluster using :
kubeadm init --pod-network-cidr=20.96.0.0/12 --apiserver-advertise-address=192.168.56.30
So now pods will get IP addresses (from the pod network CIDR) between 20.96.0.0 and 20.111.255.255.
What are these two different IP ranges? My Pods are getting IP addresses like 20.96.205.192.
The CALICO_IPV4POOL_CIDR is commented out by default; look at these lines in calico.yaml:
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
# - name: CALICO_IPV4POOL_CIDR
# value: "192.168.0.0/16"
In effect, unless manually modified before deployment, those lines are not considered during deployment.
Another important line in the yaml itself is:
# Pod CIDR auto-detection on kubeadm needs access to config maps.
This confirms that the CIDR is obtained from the cluster, not from calico.yaml.
What are these two different IPs? My Pods are getting IP addresses 20.96.205.192 and so on.
Kubeadm supports many Pod network add-ons; Calico is one of them. Calico, on the other hand, is supported by many kinds of deployment; kubeadm is just one of those.
Kubeadm's --pod-network-cidr in your deployment is the correct way to define the pod network CIDR, which is why the range 20.96.0.0/12 is effectively used.
CALICO_IPV4POOL_CIDR is required for other kinds of deployment that do not specify the CIDR pool reservation for pod networks.
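For such deployments, the fix is to uncomment and set the variable in calico.yaml before applying the manifest, along these lines (the value here is only an example; it must fall within --cluster-cidr for your cluster):

```yaml
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"  # example value; must match your cluster's Pod CIDR
```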
Note:
The range 20.96.0.0/12 is not a Private Network range, and it can cause problems if a client with a Public IP from that range tries to access your service.
The classless reserved IP ranges for Private Networks are:
10.0.0.0/8 (16,777,216 addresses)
172.16.0.0/12 (1,048,576 addresses)
192.168.0.0/16 (65,536 addresses)
You can use any subnet size inside these ranges for your Pod CIDR network; just make sure it doesn't overlap with any subnet in your network.
Additional References:
Calico - Create Single Host Kubernetes Cluster with Kubeadm
Kubeadm Calico Installation
IETF RFC1918 - Private Address Space

Change kubeadm init ip address

How can I change the IP used when I run kubeadm init? I created a master node on Google Compute Engine and want to join nodes from AWS and Azure, but kubeadm uses the internal IP address, which is only reachable from within the Google Cloud Platform network. I tried to use --apiserver-advertise-address=external ip, but in that case kubeadm gets stuck at [init] This might take a minute or longer if the control plane images have to be pulled. The firewall is open.
If I understand correctly, what you are trying to do is use a GCP instance running kubeadm as the master, with two nodes located on two other clouds.
What you need for this to work is to have a working load balancer with external IP pointing to your instance and forwarding the TCP packets back and forth.
First I created a static external IP address for my instance:
gcloud compute addresses create myexternalip --region us-east1
Then I created a target pool for the LB and added the instance:
gcloud compute target-pools create kubernetes --region us-east1
gcloud compute target-pools add-instances kubernetes --instances kubeadm --instances-zone us-east1-b
Add a forwarding rule serving on behalf of an external IP and port range that points to your target pool. You'll have to do this for the ports the nodes need to contact your kubeadm instance on. Use the external IP created before.
gcloud compute forwarding-rules create kubernetes-forward --address myexternalip --region us-east1 --ports 22 --target-pool kubernetes
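Note that the rule above forwards port 22; for the nodes to actually join, you would repeat this for the ports kubeadm uses, in particular the API server port. A sketch (the rule name here is hypothetical):

```shell
# Hypothetical: forward the Kubernetes API server port (default 6443)
# through the same external IP and target pool.
gcloud compute forwarding-rules create kubernetes-apiserver-forward \
  --address myexternalip --region us-east1 \
  --ports 6443 --target-pool kubernetes
```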
You can now check your forwarding rule, which will look something like this:
gcloud compute forwarding-rules describe kubernetes-forward
IPAddress: 35.196.X.X
IPProtocol: TCP
creationTimestamp: '2018-02-23T03:25:49.810-08:00'
description: ''
id: 'XXXXX'
kind: compute#forwardingRule
loadBalancingScheme: EXTERNAL
name: kubernetes-forward
portRange: 80-80
region: https://www.googleapis.com/compute/v1/projects/XXXX/regions/us-east1
selfLink: https://www.googleapis.com/compute/v1/projects/XXXXX/regions/us-east1/forwardingRules/kubernetes-forward
target: https://www.googleapis.com/compute/v1/projects/XXXXX/regions/us-east1/targetPools/kubernetes
Now you can follow the usual process to install kubeadm and set up your cluster on your instance; kubeadm init took around 50 seconds on mine.
Afterwards if you got the ports correctly opened in your firewall and forwarded to your master the nodes from AWS and Azure should be able to join.
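Opening those ports on GCP can be sketched as follows (the rule name and source range are hypothetical; in practice, restrict the source range to your AWS and Azure nodes' public IPs):

```shell
# Hypothetical firewall rule: allow the joining nodes to reach the
# Kubernetes API server (default port 6443) on the master instance.
gcloud compute firewall-rules create k8s-apiserver \
  --allow tcp:6443 \
  --source-ranges 0.0.0.0/0
```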
Congratulations, now you have a multicloud kubernetes cluster! :)