Guidance for deploying Redis Cache for Kubernetes Microservices

I am looking to implement Redis Cache for a number of web applications in Kubernetes, but am not sure how exactly to architect the Redis Cache part.
I was thinking that if I have 5 replicas of my application, they could all use a single Redis Cache running in a separate pod, as I wanted to avoid using a sidecar container for each application pod. Then each application would have its own Redis Cache Deployment in Kubernetes, and the application would connect to it (via a Service, I guess).
Does this sound like a suitable plan?
How does the application talk to the Redis Cache pod, do I need to expose it via a Service?
I've seen suggestions that you should co-locate your Redis Cache and application on the same node. Is this a concern, and is there a way to do this?

With Helm you can easily install Redis on your cluster using the Bitnami chart.
Alternatively, I prefer to install a Redis operator and let it do the magic.
Either way, you can install one or more Redis instances on your Kubernetes cluster and they will be accessible through a Kubernetes Service at something like my-redis-service.cool-namespace.svc.cluster.local:6379.
There is no need to co-locate Redis on the same node as your application; that's where Kubernetes does the work.
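For example, a minimal sketch of such a per-application Redis Deployment plus Service could look like the following (the names, namespace, and image tag are illustrative placeholders, not from the answer above):

# Hypothetical example: a single-replica Redis Deployment and a ClusterIP
# Service; the application connects to my-redis.cool-namespace.svc.cluster.local:6379
# from any node in the cluster, so no sidecar or node co-location is needed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-redis
  namespace: cool-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-redis
  template:
    metadata:
      labels:
        app: my-redis
    spec:
      containers:
        - name: redis
          image: redis:7          # illustrative tag
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: my-redis
  namespace: cool-namespace
spec:
  selector:
    app: my-redis
  ports:
    - port: 6379
      targetPort: 6379

All five application replicas resolve the same Service name, so they share the one Redis instance regardless of which node they land on.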

Related

Add on-premise CockroachDB node to a cluster hosted in Kubernetes

I'm planning to deploy a small Kubernetes cluster (3x 32GB Nodes). I'm not experienced with K8S and I need to come up with some kind of resilient SQL database setup and CockroachDB seems like a great choice.
I wonder if it's possible to deploy, relatively easily, a configuration where some CockroachDB instances (nodes?) live inside the K8s cluster while other instances live outside it (2 on-premise VMs). All those CockroachDB instances would need to form a single CockroachDB cluster. It might also be worth noting that Kubernetes would be hosted in the cloud (e.g. Linode).
By relatively easy I mean:
simplish to deploy
requiring little maintenance
Yes, it's straightforward to do a multi-cloud deployment of CRDB. This is one of the great advantages of CockroachDB. Simply run the cockroach start command on each of the VMs/pods running CockroachDB and they will form a cluster.
See this blog post/tutorial for more info: https://www.cockroachlabs.com/blog/multi-cloud-deployment/
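As a rough illustration of that idea, the in-cluster CockroachDB pods could pass both the in-cluster peers and the on-premise VM addresses in --join. All addresses below are hypothetical, and a real setup also needs persistent volumes, certificates (or --insecure only for testing), and networking that lets the VMs and pods reach each other:

# Hypothetical StatefulSet container excerpt: the --join list mixes in-cluster
# peers (addressed via a headless service) with the on-premise VM IPs, so all
# nodes join one CockroachDB cluster.
containers:
  - name: cockroachdb
    image: cockroachdb/cockroach:latest    # pin a specific version in practice
    command:
      - /cockroach/cockroach
      - start
      - --insecure                         # sketch only; use certificates in production
      - --join=cockroachdb-0.cockroachdb.default.svc.cluster.local,cockroachdb-1.cockroachdb.default.svc.cluster.local,10.0.0.11,10.0.0.12
      - --advertise-addr=$(POD_NAME).cockroachdb.default.svc.cluster.local
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name

The on-premise VMs would run the same cockroach start command with the same --join list, advertising addresses that the pods can reach.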

Correct Pod Architecture while using NFS?

I am using AWS EKS to run a Kubernetes cluster.
In that cluster, I am using AWS EFS for persistent storage to store application logs. I have many applications running, and I create a PVC for each application. I need persistent storage for the application logs only. Now, I need these logs in Elasticsearch as well, so I use Filebeat for that. That is my architecture.
I just want to get feedback on this architecture. Is this the correct way to do this? What can be the drawbacks of this? How are you sending application logs to ELK in kubernetes?
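For concreteness, the setup described above (a per-application EFS-backed PVC for logs, with Filebeat shipping them to Elasticsearch) could be sketched roughly as a Filebeat sidecar reading the same volume the application writes to. Every name, image tag, and the Elasticsearch endpoint below is a placeholder, not from the question:

# Hypothetical sidecar layout: the app writes logs to the EFS-backed PVC and a
# Filebeat container in the same pod tails them into Elasticsearch.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:latest                             # placeholder
          volumeMounts:
            - name: app-logs
              mountPath: /var/log/app
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:8.13.0   # illustrative tag
          volumeMounts:
            - name: app-logs
              mountPath: /var/log/app
              readOnly: true
            - name: filebeat-config
              mountPath: /usr/share/filebeat/filebeat.yml
              subPath: filebeat.yml
      volumes:
        - name: app-logs
          persistentVolumeClaim:
            claimName: my-app-efs-pvc                      # the per-application EFS PVC
        - name: filebeat-config
          configMap:
            name: my-app-filebeat-config
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-filebeat-config
data:
  filebeat.yml: |
    filebeat.inputs:
      - type: filestream
        id: app-logs
        paths:
          - /var/log/app/*.log
    output.elasticsearch:
      hosts: ["https://elasticsearch.example.com:9200"]    # hypothetical endpoint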

Best practices to organize pods in Kubernetes

I have a project that uses the following resources:
A JSF application running under JBoss and using a PostgreSQL database
2 Spring Boot APIs using MongoDB.
So, I have the following Docker containers:
JSF + JBoss in the same container
a PostgreSQL container
a MongoDB container
one container for each Spring Boot app.
In Kubernetes I need to organize these containers into Pods, so my idea is to create the following:
A Pod for the JSF + JBoss container
Another Pod for PostgreSQL
Another Pod for MongoDB
Only one Pod for both Spring Boot apps, because they need each other.
So, I have 4 Pods and 6 containers. Thinking about best practices for using k8s, is this a good way to organize my project?
tl;dr: This doesn't follow Kubernetes best practices. Each application should be a separate Deployment or StatefulSet.
A better way to run this in Kubernetes would be to use a Deployment or StatefulSet for each individual application, so it would be:
One Deployment with a single container for JSF + JBoss
One StatefulSet for PostgreSQL (though I would suggest looking at an operator to manage your PostgreSQL cluster, e.g. KubeDB)
One StatefulSet for MongoDB (again, I strongly suggest using an operator to manage your MongoDB cluster, which KubeDB can also handle)
One Deployment each for your Spring Boot applications, assuming they communicate with each other over the network. You can then manage and scale each one independently, regardless of their dependency on each other; a rough sketch follows below.
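As a sketch of the "one Deployment per application" idea, one of the Spring Boot APIs might look like this (the name, image, and ports are placeholders):

# Hypothetical Deployment + Service for one Spring Boot API; the other API,
# the JSF + JBoss app, and the databases each get their own resources.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api                                   # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders-api
  template:
    metadata:
      labels:
        app: orders-api
    spec:
      containers:
        - name: orders-api
          image: registry.example.com/orders-api:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: orders-api
spec:
  selector:
    app: orders-api
  ports:
    - port: 80
      targetPort: 8080

The other Spring Boot app then reaches it at http://orders-api through the Service, so the two can still depend on each other while being deployed and scaled separately.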

How do you monitor kubernetes nodes deployed using kops?

We have some Kubernetes clusters that have been deployed using kops in AWS.
We really like using the upstream/official images.
We have been wondering whether there is a good way to monitor the systems without installing software directly on the hosts. Are there Docker containers that can extract the information from the host? I think that we are likely concerned with:
Disk space (this seems to be passed through to Docker via df)
Host CPU utilization
Host memory utilization
Is this host/node level information already available through heapster?
Not really a question about kops, but a question about operating Kubernetes. kops stops at the point of having a functional k8s cluster. You have networking, DNS, and nodes have joined the cluster. From there your world is your oyster.
There are many different options for monitoring with k8s. If you are a small team I usually recommend offloading monitoring and logging to a provider.
If you are a larger team or have more specific needs then you can look at such options as Prometheus and others. Poke around in the https://github.com/kubernetes/charts repository, as I know there is a Prometheus chart there.
As with any deployment of any form of infrastructure you are going to need Logging, Monitoring, and Metrics. Also, do not forget to monitor the monitoring ;)
I am using https://prometheus.io/; it goes naturally with Kubernetes.
The Kubernetes API already exposes a bunch of metrics in Prometheus format.
https://github.com/kubernetes/ingress-nginx also exposes Prometheus metrics (enable-vts-status: "true"), and you can also install https://github.com/prometheus/node_exporter as a DaemonSet to monitor CPU, disk, etc. (a minimal DaemonSet sketch is shown below).
I install one Prometheus inside the cluster to monitor internal metrics and one outside the cluster to monitor LBs and URLs.
Both send alerts to the same https://github.com/prometheus/alertmanager, which MUST be outside the cluster.
It took me about a week to configure everything properly.
It was worth it.
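For the node_exporter part mentioned above, a minimal DaemonSet sketch could look like the following. The namespace and image tag are illustrative, and production setups usually add tolerations for tainted nodes and a root filesystem mount:

# Hypothetical DaemonSet: runs one node_exporter pod per node so Prometheus can
# scrape host CPU, memory, and disk metrics on port 9100.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      hostNetwork: true
      hostPID: true
      containers:
        - name: node-exporter
          image: quay.io/prometheus/node-exporter:v1.8.0   # illustrative tag
          args:
            - --path.procfs=/host/proc
            - --path.sysfs=/host/sys
          ports:
            - containerPort: 9100
          volumeMounts:
            - name: proc
              mountPath: /host/proc
              readOnly: true
            - name: sys
              mountPath: /host/sys
              readOnly: true
      volumes:
        - name: proc
          hostPath:
            path: /proc
        - name: sys
          hostPath:
            path: /sys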

Should we run a Consul container in every Pod?

We run our stack on the Google Cloud Platform (hosted Kubernetes, GKE) and have a Consul cluster running outside of K8s (regular GCE instances).
Several services running in K8s use Consul, mostly for its CP K/V store and advanced locking, not so much for service discovery so far.
We recently ran into some issues with using the Consul service discovery from within K8s. Right now our apps talk directly to the Consul Servers to register and unregister services they provide.
This is not the recommended best practice; usually Consul clients (i.e. apps using Consul) should talk to a local Consul agent. In our setup there are no local Consul agents.
My question: should we run local Consul agents as sidecar containers in each pod?
IMHO this would be a huge waste of resources, but it would match the Consul best practices better.
I tried searching on Google, but all posts about Consul and Kubernetes talk about running Consul in K8s, which is not what I want to do.
As the official Consul Helm chart and the documentation suggest, the standard approach is to run a DaemonSet of Consul clients and then use the Connect sidecar injector to inject sidecars into your pods simply by adding an annotation to the pod spec. This handles all of the boilerplate and is in line with best practices.
Consul: Connect Sidecar; https://www.consul.io/docs/platform/k8s/connect.html
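For illustration, the annotation-based injection mentioned above looks roughly like this on a Deployment's pod template; the consul.hashicorp.com/connect-inject annotation is the documented one, while the app name, image, and port are placeholders:

# Hypothetical pod template excerpt: with the Connect injector running (installed
# by the official Consul Helm chart), this annotation makes it add a Consul
# sidecar proxy to the pod automatically.
template:
  metadata:
    labels:
      app: my-app
    annotations:
      consul.hashicorp.com/connect-inject: "true"
  spec:
    containers:
      - name: my-app
        image: my-app:latest        # placeholder
        ports:
          - containerPort: 8080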