Production Redis cluster with sharding in Kubernetes - kubernetes

I have tried the stable Redis Helm chart to deploy a setup with 1 master and 3 slaves that replicate data written to the master. But the master is a single point of failure: when I deleted it, no new master pod was recreated. Also, the chart does not support data partitioning (sharding).
EDIT: I have created a Redis cluster using the redis-ha Helm chart, but there is no option for sharding.
Aren't there Redis Helm charts to deploy a production-ready HA cluster that supports partitioning (sharding)? Can you point me to resources for setting up a manageable Redis cluster? I primarily use Redis for data caching, message processing and streaming.

You need Redis Sentinel.
If Helm is an option, these links may help:
https://github.com/helm/charts/tree/master/stable/redis (notice the sentinel-related configuration parameters)
https://github.com/helm/charts/tree/master/stable/redis-ha

Here's a fine tutorial about setting up a 3-master / 3-slave Redis cluster with partitioning. It's targeted at Rancher, but that part is optional; I just tested it on Azure Kubernetes Service and it works fine.
First, apply this yaml (ConfigMap, StatefulSet with 6 replicas, Service):
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cluster
data:
  update-node.sh: |
    #!/bin/sh
    REDIS_NODES="/data/nodes.conf"
    sed -i -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${REDIS_NODES}
    exec "$@"
  redis.conf: |+
    cluster-enabled yes
    cluster-require-full-coverage no
    cluster-node-timeout 15000
    cluster-config-file /data/nodes.conf
    cluster-migration-barrier 1
    appendonly yes
    protected-mode no
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
spec:
  serviceName: redis-cluster
  replicas: 6
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
      - name: redis
        image: redis:5.0.1-alpine
        ports:
        - containerPort: 6379
          name: client
        - containerPort: 16379
          name: gossip
        command: ["/conf/update-node.sh", "redis-server", "/conf/redis.conf"]
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: conf
          mountPath: /conf
          readOnly: false
        - name: data
          mountPath: /data
          readOnly: false
      volumes:
      - name: conf
        configMap:
          name: redis-cluster
          defaultMode: 0755
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster
spec:
  type: ClusterIP
  ports:
  - port: 6379
    targetPort: 6379
    name: client
  - port: 16379
    targetPort: 16379
    name: gossip
  selector:
    app: redis-cluster
And the next step is to run this script to form the cluster (you will have to enter "yes" once interactively):
kubectl exec -it redis-cluster-0 -- redis-cli --cluster create --cluster-replicas 1 $(kubectl get pods -l app=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379 ')
for x in $(seq 0 5); do echo "redis-cluster-$x"; kubectl exec redis-cluster-$x -- redis-cli role; echo; done
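The update-node.sh wrapper in the ConfigMap exists because pod IPs change on restart, while nodes.conf pins the node's own address; the sed expression rewrites the IP only on the line marked "myself". A minimal local sketch of that substitution (the sample nodes.conf line below is made up for illustration):

```shell
# Simulate a nodes.conf where this node's stale IP must be replaced
POD_IP="10.244.1.7"
REDIS_NODES=$(mktemp)
echo "abc123 10.244.0.3:6379@16379 myself,master - 0 0 1 connected 0-5460" > "$REDIS_NODES"

# Same substitution as update-node.sh: rewrite the IP only on the "myself" line
sed -i -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" "$REDIS_NODES"

cat "$REDIS_NODES"
```

After the substitution the line carries the pod's current IP, so the restarted node rejoins the cluster under its new address.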

Related

Access kubernetes job pods using hostname

I am trying to run a k8s Job with 2 pods, in which one pod tries to connect to the other.
I cannot connect to the other pod using the pod's hostname as suggested in the doc - https://kubernetes.io/docs/concepts/workloads/controllers/job/#completion-mode.
I have created a service and am trying to access the pod as k8s-train-0.default.svc.cluster.local, as mentioned in the document.
apiVersion: batch/v1
kind: Job
metadata:
  name: k8s-train
spec:
  parallelism: 2
  completions: 2
  completionMode: Indexed
  manualSelector: true
  selector:
    matchLabels:
      app.kubernetes.io/name: proxy
  template:
    metadata:
      labels:
        app.kubernetes.io/name: proxy
    spec:
      containers:
      - name: k8s-train
        image: pytorch/pytorch:1.11.0-cuda11.3-cudnn8-runtime
        command: ["/bin/sh","-c"]
        args:
        - echo starting;
          export MASTER_PORT=54321;
          export MASTER_ADDR=k8s-train-0.trainsvc.default.svc.cluster.local;
          export WORLD_SIZE=8;
          pip install -r /data/requirements.txt;
          export NCCL_DEBUG=INFO;
          python /data/bert.py --strategy=ddp --num_nodes=2 --gpus=4 --max_epochs=3;
          echo done;
        env:
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        ports:
        - containerPort: 54321
          name: master-port
        resources:
          requests:
            nvidia.com/gpu: 4
          limits:
            nvidia.com/gpu: 4
        volumeMounts:
        - mountPath: /data
          name: data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: efs-claim
      restartPolicy: Never
  backoffLimit: 0
---
apiVersion: v1
kind: Service
metadata:
  name: trainsvc
spec:
  selector:
    app.kubernetes.io/name: proxy
  ports:
  - name: master-svc-port
    protocol: TCP
    port: 54321
    targetPort: master-port
  clusterIP: None
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
I am looking to establish communication between the pods, either using the hostname or by assigning the service to only one pod selected by job index.
Please let me know if I'm missing something here.
Thanks.
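One detail the Job manifest above appears to be missing: an Indexed Job sets each pod's hostname to $(job-name)-$(index) automatically, but the per-pod DNS name k8s-train-0.trainsvc.default.svc.cluster.local only resolves if the pod template also sets spec.subdomain to the headless Service's name. A sketch of the relevant fragment, assuming the manifest above:

```yaml
# Fragment of the Job's pod template; subdomain must match the headless Service
spec:
  template:
    spec:
      subdomain: trainsvc   # enables k8s-train-0.trainsvc.<namespace>.svc.cluster.local
```

With that in place, the headless Service (clusterIP: None) publishes an A record per pod, so index 0 is reachable by the MASTER_ADDR used in the args.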

How to access mysql pod in another pod(busybox)?

I have been given the task of connecting a MySQL pod with any other working pod (preferably busybox), but I was not able to do it. Is there a way to do this? I have looked in many places, but the explanations were a bit complicated, as I am new to Kubernetes.
MySQL YAML config for Kubernetes:
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
You can use the service name to connect to MySQL from the busybox container:
kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
The above command starts a busybox container. Run kubectl get pods to check the status of both pods.
From the busybox container you can connect to MySQL with:
mysql -h <MySQL service name> -u <username> -p<password>
Ref doc: https://kubernetes.io/blog/2015/10/some-things-you-didnt-know-about-kubectl_28/
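One caveat worth knowing: the stock busybox image does not ship a MySQL client, so the mysql command above only works if you use an image that includes one (e.g. kubectl run with image=mysql:5.6). With plain busybox you can still verify that the headless service resolves and the port is reachable, using the tools busybox does have (a sketch; assumes the mysql Service from the question):

```shell
# Inside the busybox pod: check that the headless service name resolves
nslookup mysql

# Check that the MySQL port accepts TCP connections (busybox's nc)
nc -zv mysql 3306
```

If nslookup fails, the problem is the Service (selector/labels); if nc fails, the pod or port.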

How can I mount a docker config file with Skaffold?

I want to use the Prometheus image to deploy a container as part of the local deployment. Usually one runs the container with a volume bind-mount to get the configuration file (prometheus.yml) into the container:
docker run \
-p 9090:9090 \
-v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml \
prom/prometheus
How can I achieve this with Kubernetes when using Skaffold?
Your Kubernetes configuration will look something like this. You can specify the port number and volume mounts; the important sections for mounting are volumeMounts and volumes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-prom
spec:
  selector:
    matchLabels:
      app: my-prom
  template:
    metadata:
      labels:
        app: my-prom
    spec:
      containers:
      - name: my-prom
        image: prom/prometheus:latest
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 9090
        volumeMounts:
        - name: prom-config
          mountPath: /etc/prometheus/prometheus.yml
      volumes:
      - name: prom-config
        hostPath:
          path: /path/to/prometheus.yml
---
apiVersion: v1
kind: Service
metadata:
  name: my-prom
spec:
  selector:
    app: my-prom
  ports:
  - port: 9090
    targetPort: 9090
Save the Kubernetes config file in a folder and add the config below to skaffold.yaml, pointing at the path of your Kubernetes manifests:
deploy:
  kubectl:
    manifests:
    - k8s/*.yaml
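A note on the hostPath volume above: it only works if the file exists on the cluster node itself, which is rarely the case when Skaffold deploys to a remote or containerized cluster. A more portable approach is to ship prometheus.yml as a ConfigMap and mount it; a minimal sketch, with assumed names and a placeholder config:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prom-config
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
```

Then in the Deployment, replace the hostPath volume with `configMap: {name: prom-config}` and mount it at /etc/prometheus, so the file travels with the manifests instead of depending on the node's filesystem.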

CrashLoopBackOff while increasing replicas count more than 1 on Azure AKS cluster for MongoDB image

I am deploying MongoDB to Azure AKS with an Azure File share as the volume (using a persistent volume and persistent volume claim). If I increase replicas to more than one, CrashLoopBackOff occurs: only one pod gets created, and the others fail.
My Dockerfile to create the MongoDB image:
FROM ubuntu:16.04
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
RUN echo "deb http://repo.mongodb.org/apt/ubuntu $(cat /etc/lsb-release | grep DISTRIB_CODENAME | cut -d= -f2)/mongodb-org/3.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.2.list
RUN apt-get update && apt-get install -y mongodb-org
EXPOSE 27017
ENTRYPOINT ["/usr/bin/mongod"]
YAML file for the Deployment:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: <my image of mongodb>
        ports:
        - containerPort: 27017
          protocol: TCP
          name: mongo
        volumeMounts:
        - mountPath: /data/db
          name: az-files-mongo-storage
      volumes:
      - name: az-files-mongo-storage
        persistentVolumeClaim:
          claimName: mong-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    app: mongo
For your issue, take a look at another question with the same error. It seems a mongod instance cannot initialize a volume that another instance has already initialized. I suggest you use the volume only to store data and do any initialization in the Dockerfile when creating the image. Or, better, create a separate volume for every pod through a StatefulSet, which is the recommended approach.
Update:
The YAML file below should work for you:
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    app: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  serviceName: mongo
  replicas: 3
  template:
    metadata:
      labels:
        app: mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: charlesacr.azurecr.io/mongodb:v1
        ports:
        - containerPort: 27017
          name: mongo
        volumeMounts:
        - name: az-files-mongo-storage
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: az-files-mongo-storage
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: az-files-mongo-storage
      resources:
        requests:
          storage: 5Gi
And you need to create the StorageClass before you create the StatefulSet. The YAML file:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: az-files-mongo-storage
provisioner: kubernetes.io/azure-file
mountOptions:
- dir_mode=0777
- file_mode=0777
- uid=1000
- gid=1000
parameters:
  skuName: Standard_LRS
After that, the pods start and run correctly.
You can configure accessModes: - ReadWriteMany, but the volume or storage type must support this mode; see the access modes table in the Kubernetes persistent volumes documentation.
According to that table, AzureFile supports ReadWriteMany but not AzureDisk.
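Since AzureFile supports shared access, the claim template in the StatefulSet above could request it; a minimal sketch of the changed fragment (same names as above, only the access mode differs):

```yaml
volumeClaimTemplates:
- metadata:
    name: az-files-mongo-storage
  spec:
    accessModes:
    - ReadWriteMany            # supported by AzureFile, not by AzureDisk
    storageClassName: az-files-mongo-storage
    resources:
      requests:
        storage: 5Gi
```

Note that with volumeClaimTemplates each replica already gets its own claim, so ReadWriteOnce is usually sufficient here; ReadWriteMany only matters when multiple pods must mount the same volume.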
You should be using StatefulSets for MongoDB; Deployments are for stateless services.

Cannot connect to a mongodb service in a Kubernetes cluster

I have a Kubernetes cluster on Google Cloud with a database service running in front of a mongodb deployment. I also have a series of microservices attempting to connect to that datastore.
However, they can't seem to find the host.
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongo
  name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    name: mongo
Here's my mongo deployment...
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongo
    spec:
      containers:
      - image: mongo:latest
        name: mongo
        ports:
        - name: mongo
          containerPort: 27017
          hostPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      volumes:
      - name: mongo-persistent-storage
        gcePersistentDisk:
          pdName: mongo-disk
          fsType: ext4
And an example of one of my services...
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: bandzest-artists
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: bandzest-artists
    spec:
      containers:
      - name: artists-container
        image: gcr.io/<omitted>/artists:41040e8
        ports:
        - containerPort: 7000
        imagePullPolicy: Always
        env:
        - name: DB_HOST
          value: mongo
        - name: AWS_BUCKET_NAME
          value: <omitted>
        - name: AWS_ACCESS_KEY_ID
          value: <omitted>
        - name: AWS_SECRET_KEY
          value: <omitted>
First, check that the service is created
kubectl describe svc mongo
You should see it show that it is both created and routing to your pod's IP. If you're wondering what your pod's IP is you can check it out via
kubectl get po | grep mongo
Which should return something like: mongo-deployment-<guid>-<guid>, then do
kubectl describe po mongo-deployment-<guid>-<guid>
You should make sure the pod is started correctly and says Running not something like ImagePullBackoff. It looks like you're mounting a volume from a gcePersistentDisk. If you're seeing your pod just hanging out in the ContainerCreating state it's very likely you're not mounting the disk correctly. Make sure you create the disk before you try and mount it as a volume.
If it looks like your service is routing correctly, then you can check the logs of your pod to make sure it started mongo correctly:
kubectl logs mongo-deployment-<guid>-<guid>
If it looks like the pod and logs are correct, you can exec into the pod and make sure mongo is actually starting and working:
kubectl exec -it mongo-deployment-<guid>-<guid> sh
Which should get you into the container (pod), where you can check whether mongo is actually running and accepting connections.
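A sketch of that last check, assuming the pod name pattern from above and the mongo shell that ships in the mongo:latest image:

```shell
# Exec into the mongo pod (substitute your actual pod name)
kubectl exec -it mongo-deployment-<guid>-<guid> -- sh

# Inside the container: ping the server via the mongo shell
mongo --eval 'db.runCommand({ ping: 1 })'   # should report { "ok" : 1 }
```

If the ping succeeds inside the pod but the microservices still can't connect, the problem is the Service side; note the Service above selects `name: mongo` while pods are often labeled `app: mongo`, so double-check that the Service selector matches the labels on the Deployment's pod template.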