I am new to Redis and Kubernetes and have a Kubernetes cluster set up with 3 Redis and 3 Sentinel pods:
kubernetes % kubectl -n redis get pods
NAME READY STATUS RESTARTS AGE
redis-0 1/1 Running 0 7d16h
redis-1 1/1 Running 0 7d15h
redis-2 1/1 Running 0 8d
sentinel-0 1/1 Running 0 9d
sentinel-1 1/1 Running 0 9d
sentinel-2 1/1 Running 0 9d
I have successfully connected the Sentinels and the Redis master to each other and was able to test basic HA operations with redis-cli by exec'ing into the pods. Now I want to connect my external Java application to this Redis cluster.
To the best of my understanding, we are supposed to connect to Sentinel, and Sentinel will point us to the Redis master pod where write operations can be executed.
So I have a few doubts about this: can I connect to any of the Sentinel servers and execute my operations, or should I always connect to a master? If we are connecting to a Sentinel and the Sentinel we are connected to goes down, what is the plan of action or best practice in this regard? I have read a few blogs but can't seem to reach a clear understanding.
What should I do to connect my Spring Boot application to this cluster? I read a bit on this too, and it seems I can't connect to a minikube cluster directly from IntelliJ/my local machine and need to build an image and deploy it in the same namespace. Is there any workaround for this?
Below is the YAML file for my Redis service:
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    name: redis
  ports:
    - port: 6379
      targetPort: 6379
Thanks
Yes, you should be able to connect to any Sentinel. From https://redis.io/topics/sentinel-clients:
The client should iterate the list of Sentinel addresses. For every
address it should try to connect to the Sentinel, using a short
timeout (in the order of a few hundreds of milliseconds). On errors or
timeouts the next Sentinel address should be tried.
If all the Sentinel addresses were tried without success, an error
should be returned to the client.
The first Sentinel replying to the client request should be put at the
start of the list, so that at the next reconnection, we'll try first
the Sentinel that was reachable in the previous connection attempt,
minimizing latency.
If you are using a Redis client library that supports Sentinel, you can just pass all the Sentinel addresses to the client, and the library takes care of the connection logic for you, as recommended by the Redis documentation above.
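For example, with Spring Boot this is just configuration. A minimal sketch, assuming Spring Boot 2.x properties, that the master group is named mymaster in sentinel.conf, and that a headless Service called sentinel governs the Sentinel StatefulSet in the redis namespace (all of these names are assumptions):
spring:
  redis:
    sentinel:
      master: mymaster
      nodes:
        - sentinel-0.sentinel.redis.svc.cluster.local:26379
        - sentinel-1.sentinel.redis.svc.cluster.local:26379
        - sentinel-2.sentinel.redis.svc.cluster.local:26379
The client asks whichever Sentinel it reaches for the current master address and sends writes there; on failover it re-queries the Sentinels.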
Since you are on Kubernetes, you can make this even simpler. Instead of deploying the Sentinels as a StatefulSet with 3 replicas like you have done, deploy them as a Deployment with 3 replicas. Create a Service object for the Sentinels and pass this Service address as the single Sentinel address to your Redis client library. This way you don't need to work with multiple Sentinel addresses, and Kubernetes automatically removes unreachable Sentinels from the Service endpoints, so your clients don't need to discover which Sentinels are online.
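A sketch of such a Service, assuming the Sentinel pods carry the label name: sentinel and listen on the default port 26379:
apiVersion: v1
kind: Service
metadata:
  name: sentinel
  namespace: redis
spec:
  selector:
    name: sentinel      # assumed pod label; use whatever labels your Sentinel pods actually carry
  ports:
    - port: 26379
      targetPort: 26379
Your client's Sentinel list then only needs the single address sentinel.redis.svc.cluster.local:26379.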
You could also use a Redis operator like https://github.com/spotahome/redis-operator, which takes care of the deployment lifecycle of Sentinel-based Redis HA clusters.
I have an API that recently started receiving more traffic, about 1.5x. That also led to a doubling in latency.
This surprised me since I had set up autoscaling of both nodes and pods as well as GKE internal loadbalancing.
My external API passes the request to an internal server which uses a lot of CPU. Looking at my VM instances, it seems like all of the traffic got sent to one of my two VM instances (a.k.a. Kubernetes nodes).
With loadbalancing I would have expected the CPU usage to be more evenly divided between the nodes.
Looking at my deployment, there is one pod on the first node and two pods on the second node.
My service config:
$ kubectl describe service model-service
Name: model-service
Namespace: default
Labels: app=model-server
Annotations: networking.gke.io/load-balancer-type: Internal
Selector: app=model-server
Type: LoadBalancer
IP Families: <none>
IP: 10.3.249.180
IPs: 10.3.249.180
LoadBalancer Ingress: 10.128.0.18
Port: rest-api 8501/TCP
TargetPort: 8501/TCP
NodePort: rest-api 30406/TCP
Endpoints: 10.0.0.145:8501,10.0.0.152:8501,10.0.1.135:8501
Port: grpc-api 8500/TCP
TargetPort: 8500/TCP
NodePort: grpc-api 31336/TCP
Endpoints: 10.0.0.145:8500,10.0.0.152:8500,10.0.1.135:8500
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal UpdatedLoadBalancer 6m30s (x2 over 28m) service-controller Updated load balancer with new hosts
The fact that Kubernetes started a new pod seems like a clue that Kubernetes autoscaling is working. But the pods on the second VM do not receive any traffic. How can I make GKE balance the load more evenly?
Update Nov 2:
Goli's answer leads me to think that it has something to do with the setup of the model service. The service exposes both a REST API and a GRPC API but the GRPC API is the one that receives traffic.
There is a corresponding forwarding rule for my service:
$ gcloud compute forwarding-rules list --filter="loadBalancingScheme=INTERNAL"
NAME REGION IP_ADDRESS IP_PROTOCOL TARGET
aab8065908ed4474fb1212c7bd01d1c1 us-central1 10.128.0.18 TCP us-central1/backendServices/aab8065908ed4474fb1212c7bd01d1c1
Which points to a backend service:
$ gcloud compute backend-services describe aab8065908ed4474fb1212c7bd01d1c1
backends:
- balancingMode: CONNECTION
group: https://www.googleapis.com/compute/v1/projects/questions-279902/zones/us-central1-a/instanceGroups/k8s-ig--42ce3e0a56e1558c
connectionDraining:
drainingTimeoutSec: 0
creationTimestamp: '2021-02-21T20:45:33.505-08:00'
description: '{"kubernetes.io/service-name":"default/model-service"}'
fingerprint: lA2-fz1kYug=
healthChecks:
- https://www.googleapis.com/compute/v1/projects/questions-279902/global/healthChecks/k8s-42ce3e0a56e1558c-node
id: '2651722917806508034'
kind: compute#backendService
loadBalancingScheme: INTERNAL
name: aab8065908ed4474fb1212c7bd01d1c1
protocol: TCP
region: https://www.googleapis.com/compute/v1/projects/questions-279902/regions/us-central1
selfLink: https://www.googleapis.com/compute/v1/projects/questions-279902/regions/us-central1/backendServices/aab8065908ed4474fb1212c7bd01d1c1
sessionAffinity: NONE
timeoutSec: 30
Which has a health check:
$ gcloud compute health-checks describe k8s-42ce3e0a56e1558c-node
checkIntervalSec: 8
creationTimestamp: '2021-02-21T20:45:18.913-08:00'
description: ''
healthyThreshold: 1
httpHealthCheck:
host: ''
port: 10256
proxyHeader: NONE
requestPath: /healthz
id: '7949377052344223793'
kind: compute#healthCheck
logConfig:
enable: true
name: k8s-42ce3e0a56e1558c-node
selfLink: https://www.googleapis.com/compute/v1/projects/questions-279902/global/healthChecks/k8s-42ce3e0a56e1558c-node
timeoutSec: 1
type: HTTP
unhealthyThreshold: 3
List of my pods:
kubectl get pods
NAME READY STATUS RESTARTS AGE
api-server-deployment-6747f9c484-6srjb 2/2 Running 3 3d22h
label-server-deployment-6f8494cb6f-79g9w 2/2 Running 4 38d
model-server-deployment-55c947cf5f-nvcpw 0/1 Evicted 0 22d
model-server-deployment-55c947cf5f-q8tl7 0/1 Evicted 0 18d
model-server-deployment-766946bc4f-8q298 1/1 Running 0 4d5h
model-server-deployment-766946bc4f-hvwc9 0/1 Evicted 0 6d15h
model-server-deployment-766946bc4f-k4ktk 1/1 Running 0 7h3m
model-server-deployment-766946bc4f-kk7hs 1/1 Running 0 9h
model-server-deployment-766946bc4f-tw2wn 0/1 Evicted 0 7d15h
model-server-deployment-7f579d459d-52j5f 0/1 Evicted 0 35d
model-server-deployment-7f579d459d-bpk77 0/1 Evicted 0 29d
model-server-deployment-7f579d459d-cs8rg 0/1 Evicted 0 37d
How do I A) confirm that this health check is in fact showing 2/3 backends as unhealthy? And B) configure the health check to send traffic to all of my backends?
Update Nov 5:
After finding that several pods had gotten evicted in the past because of too little RAM, I migrated the pods to a new nodepool. The old nodepool VMs had 4 CPU and 4GB memory, the new ones have 2 CPU and 8GB memory. That seems to have resolved the eviction/memory issues, but the loadbalancer still only sends traffic to one pod at a time.
Pod 1 runs on node 1, and pod 2 runs on node 2.
It seems like the loadbalancer is not splitting the traffic at all but just randomly picking one of the GRPC modelservers and sending 100% of traffic there. Is there some configuration that I missed which caused this behavior? Is this related to me using GRPC?
Turns out the answer is that you cannot loadbalance gRPC requests using a GKE loadbalancer.
A GKE loadbalancer (as well as Kubernetes' default loadbalancer) picks a new backend every time a new TCP connection is formed. For regular HTTP 1.1 requests, each request gets a new TCP connection and the loadbalancer works fine. For gRPC (which is based on HTTP/2), the TCP connection is only set up once and all requests are multiplexed on the same connection.
More details in this blog post.
To enable gRPC loadbalancing I had to:
Install Linkerd
curl -fsL https://run.linkerd.io/install | sh
linkerd install | kubectl apply -f -
Inject the Linkerd proxy in both the receiving and sending pods:
kubectl apply -f api_server_deployment.yaml
kubectl apply -f model_server_deployment.yaml
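Note that the manifests only get the proxy if they carry Linkerd's injection annotation; if the YAML files above were not already annotated, the injection is typically done inline, for example:
linkerd inject api_server_deployment.yaml | kubectl apply -f -
linkerd inject model_server_deployment.yaml | kubectl apply -f -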
After realizing that Linkerd would not work together with the GKE loadbalancer, I exposed the receiving deployment as a ClusterIP service instead.
kubectl expose deployment/model-server-deployment
Pointed the gRPC client to the ClusterIP service IP address I just created, and redeployed the client.
kubectl apply -f api_server_deployment.yaml
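The expose command above is roughly equivalent to this declarative Service (a sketch, assuming the pods carry the app: model-server label and the same ports as the original Service):
apiVersion: v1
kind: Service
metadata:
  name: model-server-deployment
spec:
  type: ClusterIP
  selector:
    app: model-server
  ports:
    - name: grpc-api
      port: 8500
      targetPort: 8500
    - name: rest-api
      port: 8501
      targetPort: 8501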
Google Cloud provides health checks to determine whether backends respond to traffic. Health checks connect to backends on a configurable, periodic basis. Each connection attempt is called a probe, and Google Cloud records the success or failure of each probe.
Based on a configurable number of sequential successful or failed probes, an overall health state is computed for each backend. Backends that respond successfully for the configured number of times are considered healthy.
Backends that fail to respond successfully for a separately configurable number of times are considered unhealthy.
The overall health state of each backend determines its eligibility to receive new requests or connections. So one possible reason an instance is not getting requests is that it is unhealthy. Refer to this documentation for creating health checks.
You can configure the criteria that define a successful probe. This is discussed in detail in the section How health checks work.
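For part A of the question (confirming how many backends the health check currently reports as healthy), the per-backend state can be read from the backend service itself, using the name and region shown in the question:
gcloud compute backend-services get-health aab8065908ed4474fb1212c7bd01d1c1 --region=us-central1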
Edit1:
A Pod is evicted from a node due to a lack of resources, or when the node fails. If a node fails, Pods on the node are automatically scheduled for deletion.
To find the exact reason pods are getting evicted, run
kubectl describe pod <pod name> and look for the node name of this pod. Then run kubectl describe node <node-name>, which will show what type of resource cap the node is hitting under the Conditions: section.
From my experience this happens when the host node runs out of disk space.
Also, after starting the pod you should run kubectl logs <pod-name> -f and check the logs for more detailed information.
Refer to this documentation for more information on eviction.
The example is described here - https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
The Service object for the wordpress-mysql is:
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
The headless services are documented here - https://kubernetes.io/docs/concepts/services-networking/service/#headless-services The Service definition defines selectors, so I suppose the following passage applies:
For headless Services that define selectors, the endpoints controller
creates Endpoints records in the API, and modifies the DNS
configuration to return records (addresses) that point directly to the
Pods backing the Service
I have followed the example on a 3 node managed k8s cluster in Azure:
C:\work\k8s\mysql-wp-demo> kubectl.exe get ep
NAME ENDPOINTS AGE
kubernetes 52.186.94.71:443 47h
wordpress 10.244.0.10:80 5h33m
wordpress-mysql 10.244.3.28:3306 5h33m
C:\work\k8s\mysql-wp-demo> kubectl.exe get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
wordpress-584f8d8666-rlbf5 1/1 Running 0 5h33m 10.244.0.10 aks-nodepool1-30294001-vmss000001 <none> <none>
wordpress-mysql-55c74969cd-4l8d4 1/1 Running 0 5h33m 10.244.3.28 aks-nodepool1-30294001-vmss000003 <none> <none>
C:\work\k8s\mysql-wp-demo>
As far as I understand there is no difference from the endpoints perspective.
Can someone explain to me - what is the point of headless services in general and in this example in particular?
A regular service has a virtual Service IP that exists as iptables or ipvs rules on each node. A new connection to this service IP is then routed with DNAT to one of the Pod endpoints, to support a form of load balancing across multiple pods.
A headless service (that isn't an ExternalName) will create DNS A records for any endpoints with matching labels or name. Connections will go directly to a single pod/endpoint without traversing the service rules.
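For example, resolving both of the question's services from a pod inside the cluster (one that has nslookup available, assuming the default namespace) would show the difference; the ClusterIP value below is hypothetical, the pod IP is the one from the question's output:
nslookup wordpress.default.svc.cluster.local
# returns the virtual Service IP, e.g. 10.0.123.45; connections are DNAT'ed to a backing pod
nslookup wordpress-mysql.default.svc.cluster.local
# returns the backing pod IP directly, here 10.244.3.28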
A service with a type of ExternalName is just a DNS CNAME record in kubernetes DNS. These are headless by definition as they are names for an IP external to the cluster.
The linked mysql deployment/service example is leading into StatefulSets. This Deployment is basically a single-pod StatefulSet. When you do move to a StatefulSet with multiple pods, you will mostly want to address individual members of the StatefulSet with a specific name (see mdaniel's comment).
Another reason to set clusterIP: None is to lessen the load on iptables processing which slows down as the number of services (i.e. iptables rules) increases. Applications that don't need multiple pods, don't need the Service IP. Setting up a cluster to use IPVS alleviates the slow down issue somewhat.
I am trying to access a web API deployed into my local Kubernetes cluster running on my laptop (Docker -> Settings -> Enable Kubernetes). Below is my Pod spec YAML.
kind: Pod
apiVersion: v1
metadata:
  name: test-api
  labels:
    app: test-api
spec:
  containers:
    - name: testapicontainer
      image: myprivaterepo/testapi:latest
      ports:
        - name: web
          hostPort: 55555
          containerPort: 80
          protocol: TCP
kubectl get pods shows the test-api pod running. However, when I try to connect to it using http://localhost:55555/testapi/index from my laptop, I do not get a response. But I can access the application from a container in a different pod within the cluster (I did a kubectl exec -it into a different container), using the URL http://<test-api pod cluster IP>/testapi/index. Why can't I access the application using the localhost:hostport URL?
I'd say that this is strongly not recommended.
According to k8s docs: https://kubernetes.io/docs/concepts/configuration/overview/#services
Don't specify a hostPort for a Pod unless it is absolutely necessary. When you bind a Pod to a hostPort, it limits the number of places the Pod can be scheduled, because each <hostIP, hostPort, protocol> combination must be unique. If you don't specify the hostIP and protocol explicitly, Kubernetes will use 0.0.0.0 as the default hostIP and TCP as the default protocol.
If you only need access to the port for debugging purposes, you can use the apiserver proxy or kubectl port-forward.
If you explicitly need to expose a Pod's port on the node, consider using a NodePort Service before resorting to hostPort.
So... is the hostPort really necessary in your case? Or would a NodePort Service solve it?
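If a NodePort Service works for you, a minimal sketch matching the pod's app: test-api label and containerPort 80 would be (the nodePort value is just an example):
apiVersion: v1
kind: Service
metadata:
  name: test-api
spec:
  type: NodePort
  selector:
    app: test-api
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30555   # optional; must fall in the 30000-32767 range
With Docker Desktop's Kubernetes that should then be reachable at http://localhost:30555/testapi/index.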
If it is really necessary, then you could try using the node IP returned from the command:
kubectl get nodes -o wide
http://ip-from-the-command:55555/testapi/index
Also, another test that may help your troubleshooting is checking whether your app is accessible on the Pod IP.
UPDATE
I've done some tests locally and understood better what the documentation is trying to explain. Let me go through my test:
First, I created a Pod with hostPort: 55555; I did that with a simple nginx.
Then I listed my Pods and saw that this one was running on one specific Node.
Afterwards, I tried to access the Pod on port 55555 through my master node IP and the other node's IP without success, but when accessing it through the IP of the Node where the Pod was actually running, it worked.
So the "issue" (and actually that's why this approach is not recommended) is that the Pod is accessible only through that specific Node's IP. If it restarts and starts on a different Node, the IP will also change.
Precondition: the Kubernetes cluster has 1 master and 2 workers. The cluster uses one CIDR for all nodes.
Question: how do I configure the network so that a pod on worker1 can communicate with a pod on worker2?
Kubernetes has its own service discovery, and you can define a Service for communication. If you want to communicate with or send a request to a workload running on worker2, you define a Service for it. Suppose you have an add-service workload and you want to communicate with it; then you define a Service for it like below:
apiVersion: v1
kind: Service
metadata:
  name: add-service
spec:
  selector:
    app: add
  ports:
    - port: 3000
      targetPort: add-service   # a named container port on the target pods
Then from worker1 you can use add-service to communicate, and Kubernetes will use service discovery to find the right worker. Here is a detailed hackernoon article about how to create a pod, deployment, and service and communicate between them.
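For instance, from any pod running on worker1 (assuming its image has curl and the add pods answer HTTP on port 3000), a quick check could be:
kubectl exec -it <some-pod-on-worker1> -- curl http://add-service:3000/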
A Kubernetes cluster consists of one or more nodes. A node is a host system, whether physical or virtual, with a container runtime and its dependencies (i.e. mostly Docker) and several Kubernetes system components, and it is connected to a network that allows it to reach the other nodes in the cluster. A simple cluster might have just two such nodes.
You can find more answers here
When the cluster uses one CIDR for all nodes, each pod will be assigned an IP address from that one subnet.
Currently, Kubernetes assigns the IP address to a pod, and that address is shared within the pod by all its containers.
I am trying to assign a static IP address to a pod, i.e. one in the same network range as the addresses Kubernetes assigns. I am using the following deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: rediscont
    spec:
      containers:
        - name: redisbase
          image: localhost:5000/demo/redis
          ports:
            - containerPort: 6379
              hostIP: 172.17.0.1
              hostPort: 6379
On the Docker host where it's deployed, I see the following:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4106d81a2310 localhost:5000/demo/redis "/bin/bash -c '/root/" 28 seconds ago Up 27 seconds k8s_redisbase.801f07f1_redis-1139130139-jotvn_default_f1776984-d6fc-11e6-807d-645106058993_1b598062
71b03cf0bb7a gcr.io/google_containers/pause:2.0 "/pause" 28 seconds ago Up 28 seconds 172.17.0.1:6379->6379/tcp k8s_POD.99e70374_redis-1139130139-jotvn_default_f1776984-d6fc-11e6-807d-645106058993_8c381981
iptables-save gives the following output:
-A DOCKER -d 172.17.0.1/32 ! -i docker0 -p tcp -m tcp --dport 6379 -j DNAT --to-destination 172.17.0.3:6379
Even with this, from other pods the IP 172.17.0.1 is not accessible.
Basically, the question is how to assign a static IP to a pod so that 172.17.0.3 doesn't get assigned to it.
Generally, assigning a Pod a static IP address is an anti-pattern in Kubernetes environments. There are a couple of approaches you may want to explore instead. Using a Service to front-end your Pods (or to front-end even just a single Pod) will give you a stable network identity, and allow you to horizontally scale your workload (if the workload supports it). Alternately, using a StatefulSet may be more appropriate for some workloads, as it will preserve startup order, host name, PersistentVolumes, etc., across Pod restarts.
I know this doesn't necessarily directly answer your question, but hopefully it provides some additional options or information that proves useful.
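As a minimal sketch of the Service approach, reusing the run: rediscont label from the Deployment in the question:
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    run: rediscont
  ports:
    - port: 6379
      targetPort: 6379
Other pods can then reach Redis at redis:6379 no matter which pod IP gets assigned.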
Assigning static IP addresses to Pods is not possible in OSS Kubernetes, but it can be configured via some CNI plugins. For instance, Calico provides a way to override IPAM and use fixed addresses by annotating the pod. The address must be within a configured Calico IP pool and not currently in use.
https://docs.projectcalico.org/networking/use-specific-ip
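A sketch of what that looks like; the address here is hypothetical and must come from a configured Calico IP pool:
apiVersion: v1
kind: Pod
metadata:
  name: redis
  annotations:
    cni.projectcalico.org/ipAddrs: "[\"10.244.0.50\"]"   # hypothetical address; must be inside a Calico pool and unused
spec:
  containers:
    - name: redisbase
      image: localhost:5000/demo/redis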
When you created the Deployment with one replica and defined hostIP and hostPort,
you basically bound the hostIP and hostPort of your host machine to your pod IP and container port, so that traffic is routed from hostIP:port to podIP:port.
The created pod (and the container inside of it) was assigned an IP address from the IP range available to it. Basically, the IP range depends on the CNI networking plugin used and how it allocates an IP range to each node. For instance, flannel by default provides a /24 subnet to each host, from which the Docker daemon allocates IPs to containers. So the hostIP: 172.17.0.1 option in the spec has nothing to do with assigning an IP address to the pod.
Basically, the question is how to assign static IP to a pod so that
172.17.0.3 doesn't get assigned to it
As far as I know, all major networking plugins provide a range of IPs to hosts, so that a pod's IP will be assigned from that range.
You can explore different networking plugins and look at how each of them deals with IPAM(IP Address Management), maybe some plugin provides that functionality or offers some tweaks to implement that, but overall its usefulness would be quite limited.
Below is useful info on "hostIP, hostPort" from official K8 docs:
Don't specify a hostPort for a Pod unless it is absolutely necessary.
When you bind a Pod to a hostPort, it limits the number of places the
Pod can be scheduled, because each <hostIP, hostPort, protocol>
combination must be unique. If you don't specify the hostIP and
protocol explicitly, Kubernetes will use 0.0.0.0 as the default hostIP
and TCP as the default protocol.
If you only need access to the port
for debugging purposes, you can use the apiserver proxy or kubectl
port-forward.
If you explicitly need to expose a Pod’s port on the
node, consider using a NodePort Service before resorting to hostPort.
Avoid using hostNetwork, for the same reasons as hostPort.
Original info link to config best practices.