Redis cluster without persistence on Kubernetes

I am trying to set up a Redis cluster without persistence on a Kubernetes cluster. Is there a way I can do that without a persistent volume? I need auto-recovery after a pod reboot. Is there an easy way to do that?
I tried updating the node info with a script on startup, which doesn't really work because the rebooted pod comes up with a new private IP.
FYI, I have created a StatefulSet and a ConfigMap as referred to here: https://github.com/rustudorcalin/deploying-redis-cluster
and the emptyDir setup for volumes.
ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/
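For context, the emptyDir part looks roughly like this (a minimal pod-spec fragment; the volume name and mount path are illustrative, not from the linked repo):
      volumes:
        - name: redis-data      # illustrative name
          emptyDir: {}
      containers:
        - name: redis
          volumeMounts:
            - name: redis-data
              mountPath: /data  # Redis default data directory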

You cannot do this; the state of your Redis is lost when a pod is restarted. Even with persistent storage it is not so easy. You will need some kind of orchestration to manage and reconnect Redis.
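For the stable-identity part of the problem, the usual building block is a headless Service in front of the StatefulSet: each pod then keeps a stable DNS name (e.g. redis-cluster-0.redis-cluster) even though its IP changes on reboot. A minimal sketch, assuming the StatefulSet pods are labeled app: redis-cluster (names assumed, not taken from the linked repo):
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster
spec:
  clusterIP: None          # headless: DNS resolves to the individual pod IPs
  ports:
    - port: 6379
      name: client
    - port: 16379          # Redis Cluster bus port (client port + 10000)
      name: gossip
  selector:
    app: redis-cluster
Note that Redis Cluster records peer IPs, not hostnames, in nodes.conf, so stable DNS alone does not make the cluster heal itself after a reboot.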

Do you mean actual cluster mode, or just running Redis in general without persistence? This is what I usually use:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ...
  namespace: ...
  labels:
    app.kubernetes.io/name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: redis
  template:
    metadata:
      labels:
        app.kubernetes.io/name: redis
    spec:
      containers:
        - name: default
          image: redis:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 6379
          args:
            - "--save"
            - ""
            - "--appendonly"
            - "no"

Related

PVCs and multiple namespaces

I have a small SaaS app that customers can sign up for, and they get their own instance completely separated from the rest of the clients. It's a simple REST API, with a DB (Postgres) and Caddy, that gets deployed using a docker-compose.
This works fine, but it requires me to create the VPS and deploy the different services, and it is really hard to manage as most of the work is manual.
I have decided to use Kubernetes, and I have gotten to the point where I can create a separate instance of the system in its own, isolated namespace for each client, fully automated. This creates the different deployments, services, and pods. I also create a PVC for each namespace/client.
The issue has to do with Persistent Volume Claims and how they work in namespaces. As I want to keep the data completely separate from other instances, I wanted to create a PVC for each client, so that only the DB from that client can access it (and the server, as it requires some data to be written to disk).
This works fine in minikube, but the issue comes with the hosting provider. I use DigitalOcean's managed cluster, and they do not allow multiple PVCs to be created, therefore making it impossible to achieve the level of isolation that I want. They allow you to mount a block storage volume (whatever size you need) and then use it. This would mean that the data is all stored on the "same disk", and all namespaces can access it.
My question is: Is there a way to achieve the same level of isolation, i.e. separate the mount points for each of the DB instances, in such a way that I can still achieve (or at least get close to) the level of separation that I require? The idea would be something along the lines of:
/pvc-root
  /client1
    /server
    /db
  /client2
    /server
    /db
  ...
This is what I have for now:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: database-claim
  name: database-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: database
  name: database
spec:
  ports:
    - name: "5432"
      port: 5432
      targetPort: 5432
  selector:
    io.kompose.service: database
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: database
  name: database
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: database
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        io.kompose.service: database
    spec:
      containers:
        - env:
            - name: POSTGRES_DB
              value: db_name
            - name: POSTGRES_PASSWORD
              value: db_password
            - name: POSTGRES_USER
              value: db_user
          image: postgres:10
          imagePullPolicy: ""
          name: postgres
          ports:
            - containerPort: 5432
          resources: {}
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: database-claim
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
        - name: database-claim
          persistentVolumeClaim:
            claimName: database-claim
---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: server
  name: server
spec:
  ports:
    - name: "8080"
      port: 8080
      targetPort: 8080
  selector:
    io.kompose.service: server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: server
  name: server
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: server
  template:
    metadata:
      labels:
        io.kompose.service: server
    spec:
      containers:
        - env:
            - name: DB_HOST
              value: database
            - name: DB_NAME
              value: db_name
            - name: DB_PASSWORD
              value: db_password
            - name: DB_PORT
              value: "5432"
            - name: DB_USERNAME
              value: db_user
          image: <REDACTED>
          name: server-image
          ports:
            - containerPort: 8080
      restartPolicy: Always
      volumes: null
EDIT Feb 02, 2021
I have been in contact with DO's customer support, and they clarified a few things:
You need to manually attach a volume to the cluster, so the PVC deployment file is ignored. The volume is then mounted and available to the cluster, but NOT in a ReadWriteMany config, which could have served this case quite well.
They provide an API, so in theory I could create the volume (for each client) programmatically and then attach a volume for a specific client, keeping ReadWriteOnce.
This of course locks me in to them as a vendor, and makes things a bit harder to configure and migrate.
I am still looking for suggestions on whether this is the correct approach for my case. If you have a better way, let me know!
In theory this should be achievable.
Is there a way to achieve the same level of isolation, i.e. separate the mount points for each of the DB instances, in such a way that I can still achieve (or at least get close) to the level of separation that I require?
Don't run a production database with a single volume. You want to run a database with some form of replication, in case a volume or node crashes.
Either run the database in a distributed setup, e.g. using Crunchy PostgreSQL for Kubernetes, or use a managed database, e.g. DigitalOcean Managed Database.
Within that DBMS, create logical databases or schemas for each customer, if you really need that strong isolation. Hint: it is probably easier to maintain with less isolation, e.g. using multi-tenancy within the tables.
Late to the party, but here's a solution that may work well for your use case, if you still haven't found one.
Create a highly available NFS/Ceph server, then export it in a way that your pods can attach to it. Then create PVs and PVCs as you'd like and bypass all that DO blockage.
I support an application very similar to what you describe, and I went with a highly available NFS server using DRBD, Corosync, and Pacemaker. It all works as expected; no issues so far.
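For reference, pointing a PersistentVolume at such an export looks roughly like this (a sketch; the server address, export path, and per-client names are assumed):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: client1-db-pv              # hypothetical per-client volume
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.10              # assumed NFS server address
    path: /exports/client1/db      # assumed per-client export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-claim             # matches the claim name in the question
  namespace: client1               # the client's namespace
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""             # bind to the pre-created PV, not a dynamic one
  volumeName: client1-db-pv
  resources:
    requests:
      storage: 5Gi
Since PVs are cluster-scoped while PVCs are namespaced, binding one pre-created PV per client namespace keeps the per-client mount points separated the way the directory layout above sketches.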

Problems communicating a Kubernetes pod with external endpoints (REST services, SQL Server, Kafka, Redis, etc.)

I have a single-node Kubernetes cluster. I have Dockerized Java services that access REST services, SQL Server, Kafka, and other endpoints outside the Kubernetes cluster but in the same Google Cloud network.
The main reason I am asking for help is that I can't connect the Dockerized Java services inside the pods to the aforementioned external endpoints.
I tried Flannel networking before, but now I've reset the cluster and installed Calico networking, without positive results.
Pods of the cluster running by default:
Cluster nodes:
I deploy some Dockerized Java services as CronJobs, others as Deployments. To communicate these CronJobs or Deployments with external endpoints like Kafka, SQL Server, etc., I use Services.
An example of each of them:
Cronjob:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronjob-name
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            cronjob1: cronjob-name
        spec:
          containers:
            - image: repository/repository-name:service-name:version
              imagePullPolicy: ""
              name: service-name
              resources: {}
          restartPolicy: OnFailure
      selector:
        matchLabels:
          cronjob1: cronjob-name
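Note: a Job spec normally omits selector entirely; Kubernetes generates a unique one automatically, and a hand-written selector is only accepted together with manualSelector: true.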
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    deployment1: deployment_name
  name: deployment_name
spec:
  replicas: 1
  selector:
    matchLabels:
      deployment1: deployment_name
  strategy: {}
  template:
    metadata:
      labels:
        deployment1: deployment_name
    spec:
      containers:
        - image: repository/repository-name:service-name:version
          imagePullPolicy: ""
          name: service-name
          resources: {}
      imagePullSecrets:
        - name: dockerhub
      restartPolicy: Always
      serviceAccountName: ""
      volumes: null
status: {}
Service:
apiVersion: v1
kind: Service
metadata:
  name: sqlserver
spec:
  type: ClusterIP
  selector:
    cronjob1: cronjob1
    deployment1: deployment1
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
My problem is that from the Java services I can't connect, for example, to the SQL Server instance. I've checked the DNS and Calico pod logs and there were no errors. I've tried getting a shell inside the running pods, and from inside a pod I can't telnet to the SQL Server instance.
Could you give me some idea of what the problem might be, or what tests I could run?
Thank you very much!
I resolved the problem by configuring the Kubernetes cluster again, but with Calico instead of Flannel. Thanks for the replies. I hope this helps anyone else.
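As an aside: when a Service is meant to front an endpoint that lives outside the cluster, the usual pattern is a Service without a selector paired with a manually managed Endpoints object, rather than pod selectors as in the manifest above. A minimal sketch, with the external IP assumed for illustration:
apiVersion: v1
kind: Service
metadata:
  name: sqlserver
spec:
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
  name: sqlserver          # must match the Service name
subsets:
  - addresses:
      - ip: 10.128.0.50    # assumed IP of the external SQL Server
    ports:
      - port: 1433
With no selector, kube-proxy routes traffic for sqlserver:1433 to whatever addresses the Endpoints object lists.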

OpenShift: Is it possible to make different pods of the same deployment use different resources?

In OpenShift, say there are two pods of the same deployment in a test environment. Is it possible to make one pod use/connect to database1, and the other pod use/connect to database2, via a label or configuration?
I have created two different pods from the same code base, i.e. an image containing the same compiled code. Using Spring profiles, I passed two different arguments for the connection to the Oracle database, for example.
How about using a StatefulSet for deploying the pods? A StatefulSet gives each pod its own PersistentVolume, so if you place a configuration file with different database connection data on each PersistentVolume, each pod can use a different database, because each pod refers to a different config file.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app
spec:
  serviceName: "app"
  replicas: 2
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: example.com/app:1.0
          ports:
            - containerPort: 8080
              name: web
          volumeMounts:
            - name: databaseconfig
              mountPath: /usr/local/databaseconfig
  volumeClaimTemplates:
    - metadata:
        name: databaseconfig
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 10Mi
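With this, the volumeClaimTemplates entry creates one PVC per replica, named databaseconfig-app-0 and databaseconfig-app-1, so a different connection file can be placed on each volume and each pod keeps its own claim across restarts.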

How to get the pods into a Running state

I'm trying to set up Cassandra on a Kubernetes cluster made of three virtual machines, using two different files (Deployment and Service). In order to do this, I use the command
kubectl create -f file.yaml
The Service file works perfectly, but when I start the other one with three replicas, the state of the pods is CrashLoopBackOff instead of Running.
The configuration of the Deployment file is the following:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
        - name: cassandra
          image: gcr.io/google_containers/cassandra:v5
          ports:
            - containerPort: 9042
And this is the Service file:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  type: NodePort
  ports:
    - port: 9042
  selector:
    app: cassandra
I appreciate any help on this.
You shouldn't be using a Deployment for running stateful applications. StatefulSets are recommended for running databases like Cassandra.
Follow the link below for reference: https://kubernetes.io/docs/tutorials/stateful-application/cassandra/
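A rough sketch of that shape, keeping the image from the question (the linked tutorial adds seed discovery, probes, resource limits, and volumeClaimTemplates on top of this):
apiVersion: v1
kind: Service
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  clusterIP: None              # headless Service: stable DNS names per pod
  ports:
    - port: 9042
  selector:
    app: cassandra
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra       # ties pod DNS names to the headless Service
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
        - name: cassandra
          image: gcr.io/google_containers/cassandra:v5
          ports:
            - containerPort: 9042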

Grafana is not working on a Kubernetes cluster when using a k8s Service

I am trying to set up a very simple monitoring cluster for my k8s cluster. I have successfully created a Prometheus pod and it is running fine.
When I tried to create a Grafana pod the same way, it's not accessible through the NodePort.
My Grafana deployment file is:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grafana-deployment
  namespace: monitoring
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: grafana-server
    spec:
      containers:
        - name: grafana
          image: grafana/grafana:5.1.0
          ports:
            - containerPort: 3000
And the Service file is:
apiVersion: v1
kind: Service
metadata:
  name: grafana-service
  namespace: monitoring
spec:
  selector:
    app: grafana-server
  type: NodePort
  ports:
    - port: 3000
      targetPort: 3000
Note: when I create a simple Docker container on the same host using the same image, it works fine.
I have come to know that my server provider had not enabled these ports (like 3000 for Grafana, 5601 for Kibana). I never thought of this, since I have been using these servers for quite a long time and never faced such a blocker; they implemented these rules recently.
After some port approvals, I tried the same config again and it worked like a charm.
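One detail worth knowing here: with type NodePort and no explicit nodePort set, Kubernetes allocates a port from the 30000-32767 range on every node, so it is that allocated port, not 3000 itself, that must be reachable from outside. You can pin it in the Service if the firewall only opens specific ports (32000 below is an assumed example within the default range):
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 32000    # assumed; must lie inside the cluster's NodePort range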