Best practices to organize pods in Kubernetes

I have a project that uses the following resources:
A JSF application running under JBoss, using PostgreSQL
2 Spring Boot APIs using MongoDB
So I have the following Docker containers:
JSF + JBoss in the same container
A PostgreSQL container
A MongoDB container
One container for each Spring Boot app
In Kubernetes I need to organize these containers into Pods, so my idea is to create the following:
A Pod for the JSF + JBoss container
Another for PostgreSQL
Another Pod for MongoDB
Only one Pod for both Spring Boot apps, because they need each other.
So I have 4 Pods and 6 containers. Thinking about k8s best practices, is this a good way to organize my project?

tl;dr: this doesn't follow Kubernetes best practices. Each application should be a separate Deployment or StatefulSet.
A better way to run this in Kubernetes would be to use a Deployment or StatefulSet for each individual application, so it would be:
One Deployment with a single container for JSF + JBoss
One StatefulSet for PostgreSQL (though I would suggest looking at an Operator to manage your PostgreSQL cluster, e.g. KubeDB)
One StatefulSet for MongoDB (again, I strongly suggest using an Operator to manage your MongoDB cluster, which KubeDB can also handle)
One Deployment each for your Spring Boot applications, assuming they communicate with each other over the network. You can then manage and scale each one independently, regardless of their dependency on one another. A minimal sketch of this pattern follows.
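As a rough illustration of the Deployment-per-application pattern (all names and images here are placeholders, not from the question), one of the Spring Boot APIs might be described like this:

```yaml
# Hypothetical Deployment for one Spring Boot API; name/image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders-api
  template:
    metadata:
      labels:
        app: orders-api
    spec:
      containers:
        - name: orders-api
          image: myregistry/orders-api:1.0.0
          ports:
            - containerPort: 8080
---
# A Service gives the other applications a stable DNS name for this Deployment.
apiVersion: v1
kind: Service
metadata:
  name: orders-api
spec:
  selector:
    app: orders-api
  ports:
    - port: 8080
      targetPort: 8080
```

The second Spring Boot API would get its own Deployment and Service of the same shape, and the two would talk to each other through those Service names.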

Related

Guidance for deploying Redis Cache for Kubernetes Microservices

I am looking to implement Redis Cache for a number of web applications in Kubernetes, but am not sure how exactly to architect the Redis Cache part.
My thinking was that if I have 5 replicas of an application, they could all share a single Redis Cache running in a separate pod, since I wanted to avoid using a sidecar container in every application pod. Each application would then have its own Redis Cache Deployment in Kubernetes, and the application's replicas would connect to it (through a Service, I guess).
Does this sound like a suitable plan?
How does the application talk to the Redis Cache pod, do I need to expose it via a Service?
I've seen suggestions that you should co-locate your Redis Cache and application on the same node. Is this a concern, and is there a way to do this?
With Helm you can easily install Redis on your cluster using the Bitnami chart.
Alternatively, I prefer to install a Redis operator and let it do the magic.
Either way, you can install one or more Redis instances on your Kubernetes cluster, and they will be accessible through a Kubernetes Service at something like redis://my-redis-service.cool-namespace.svc.cluster.local:6379.
There is no need to co-host Redis on the same node as the application; that kind of scheduling is exactly the work Kubernetes does for you. A minimal Service definition is sketched below.
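For concreteness, here is a minimal sketch of such a Service (the names match the example hostname above and are otherwise arbitrary; the selector must match whatever labels your Redis pods actually carry):

```yaml
# Hypothetical in-cluster Service in front of a Redis Deployment/StatefulSet.
apiVersion: v1
kind: Service
metadata:
  name: my-redis-service
  namespace: cool-namespace
spec:
  selector:
    app: redis        # must match the labels on the Redis pods
  ports:
    - port: 6379
      targetPort: 6379
```

Every application pod in the cluster can then reach Redis at my-redis-service.cool-namespace.svc.cluster.local:6379, no matter which node the Redis pod is scheduled on.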

How to deploy Microservices with MongoDB dependency on Kubernetes?

I have 2 backend API microservices, each of which has its own MongoDB database. I want to deploy 2 or more instances of each of these in a Kubernetes cluster on a cloud provider such as AWS.
Each instance of a microservice runs as a container in a Pod. Is it possible to deploy MongoDB as another container in the same Pod? Or what is the best practice for this use case?
If two or more instances of the same microservice are running in different Pods, do I need to deploy 2 or more instances of MongoDB, or is a single MongoDB referenced by the multiple instances of the same microservice? What is the best practice for this use case?
Each microservice is a Spring Boot application. Do I need to do anything special in the Spring Boot application source code just because it will run as a microservice, as opposed to a traditional Spring Boot application?
Each instance of a microservice runs as a container in a Pod. Is it possible to deploy MongoDB as another container in the same Pod?
Nope; your data would be gone whenever the Pod is replaced, e.g. when you upgrade your application.
Or what is the best practice for this use case?
Databases in a modern production environment are run as clusters, for availability reasons. It is best practice either to use a managed service for the database or to run it as a cluster with, e.g., 3 instances on different nodes.
If two or more instances of the same microservice are running in different Pods, do I need to deploy 2 or more instances of MongoDB, or is a single MongoDB referenced by the multiple instances of the same microservice?
All your instances of the microservice should access the same database cluster, otherwise they would see different data.
Each microservice is a Spring Boot application. Do I need to do anything special in the Spring Boot application source code just because it will run as a microservice, as opposed to a traditional Spring Boot application?
Spring Boot follows The Twelve-Factor App methodology and is therefore well suited to a cloud environment such as Kubernetes. In particular, configuration can be externalized through environment variables, as sketched below.
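As a small, hypothetical example of that externalized configuration (service name, image, and database name are placeholders), the Deployment can inject the MongoDB connection string via Spring Boot's relaxed binding for spring.data.mongodb.uri:

```yaml
# Hypothetical container spec fragment from a Deployment: the MongoDB URI is
# injected as an environment variable instead of being baked into the image.
containers:
  - name: orders-api
    image: myregistry/orders-api:1.0.0
    env:
      - name: SPRING_DATA_MONGODB_URI   # relaxed binding for spring.data.mongodb.uri
        value: mongodb://mongodb.default.svc.cluster.local:27017/orders
```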
Deploy the MongoDB pod as a StatefulSet; refer to the Helm chart below to deploy MongoDB:
https://github.com/bitnami/charts/tree/master/bitnami/mongodb
Using an init container in the Spring Boot pod, you can check whether MongoDB is up and running before the Spring Boot container starts. Follow the link below for help (a sketch of such an init container follows it):
How can we create service dependencies using kubernetes
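A minimal sketch of that init container, assuming the MongoDB chart created a Service named mongodb on the default port:

```yaml
# Hypothetical initContainer that blocks pod startup until MongoDB answers on 27017.
initContainers:
  - name: wait-for-mongodb
    image: busybox:1.36
    command:
      - sh
      - -c
      - until nc -z mongodb 27017; do echo waiting for mongodb; sleep 2; done
```

The main Spring Boot container is only started once the init container exits successfully.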

Migrate Service Fabric Reliable Collections to Kubernetes

We are in the process of migrating our Service Fabric services to Kubernetes. Most of them were "stateless" services and were easy to migrate. However, we have one "stateful" service that uses SF's Reliable Collections pretty heavily.
K8s has StatefulSets, but that's not really comparable to SF's Reliable Collections.
Is there a .NET library or other solution to implement something similar to SF's Reliable Collections in K8s?
AFAIK this cannot be done by using a .NET library.
K8s is all about orchestration. SF, on the other hand, is an orchestrator + a rich programming/application model + state management.
If you want something like Reliable Collections in K8s, then you have to either:
A) build your own replication solution, with leader election and all, or
B) use a private etcd/CockroachDB/etc. store.
This article is pretty good in terms of differences.
https://learn.microsoft.com/en-us/archive/blogs/azuredev/service-fabric-and-kubernetes-comparison-part-1-distributed-systems-architecture#split-brain-and-other-stateful-disasters
"Existing systems provide varying levels of support for microservices, the most prominent being Nirmata, Akka, Bluemix, Kubernetes, Mesos, and AWS Lambda [there’s a mixed bag!!]. SF is more powerful: it is the only data-ware orchestration system today for stateful microservices"
However, they don't solve the coordination problem (saving data on the primary instance will automatically replicate to the others for recovery when the primary instance dies). That's what SF reliable collections does out of the box.
StatefulSets are valuable for applications that require one or more of the following.
Stable (persistent), unique network identifiers.
Stable, persistent storage.
Ordered, graceful deployment and scaling.
Ordered, automated rolling updates.
If your application doesn't require any of these, you should deploy your application using a Deployment.
There is a good Kubernetes guide on how to Run a Replicated Stateful Application here.
This page shows how to run a replicated stateful application using a StatefulSet controller. This application is a replicated MySQL database. The example topology has a single primary server and multiple replicas, using asynchronous row-based replication.
The StatefulSet controller starts Pods one at a time, in order by their ordinal index. It waits until each Pod reports being Ready before starting the next one. This ordered startup, combined with Init Containers, is used to bootstrap MySQL replication in the right order.
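To make the StatefulSet mechanics concrete, here is a hypothetical skeleton (image and sizes are placeholders; a real replicated setup, like the one in the linked guide, needs considerably more configuration):

```yaml
# Hypothetical StatefulSet skeleton: stable pod names (mysql-0, mysql-1, ...)
# and one PersistentVolumeClaim per replica via volumeClaimTemplates.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql        # headless Service giving each pod a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ALLOW_EMPTY_PASSWORD   # demo only; never in production
              value: "1"
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

Unlike a Deployment, deleting or rescheduling mysql-1 reattaches it to the same data volume, which is what makes ordered, stateful replication workable.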
Operators can help
While operators are not necessary, they can help run stateful apps on Kubernetes with features like application-level HA management, backups, and restore.
You can use existing Operators or develop your own. The operator package includes all the configuration needed to deploy and manage the application from a Kubernetes point of view: from the StatefulSet to be used, to any required storage, rollout strategies, persistence and affinity configuration, and more. Kubernetes will then rely on the operator to validate instances of the application against the specification, ensuring it runs in the same way across instances in all clusters it is deployed in.
Some DB operators:
You can deploy a MySQL database using the Kubernetes operator developed by Oracle (currently in a preview state):
https://github.com/mysql/mysql-operator
There's also a PostgreSQL operator by Crunchy Data for deploying PostgreSQL to Kubernetes:
https://github.com/CrunchyData/postgres-operator
MongoDB maintains an operator to deploy MongoDB Enterprise to a Kubernetes cluster:
https://github.com/mongodb/mongodb-enterprise-kubernetes
You can find ready-made operators on OperatorHub.io to suit your use case.

Difference between Kompose and compose-on-kubernetes

I am evaluating a migration of an application working with docker-compose to Kubernetes and came across two solutions: Kompose and compose-on-kubernetes.
I'd like to know their differences in terms of functionality and ease of use, to decide which one is better suited.
Both products provide a migration path from docker-compose to Kubernetes, but they do it in slightly different ways.
Compose on Kubernetes runs within your Kubernetes cluster and allows you to deploy your compose setup unchanged on the Kubernetes cluster.
Kompose translates your docker-compose files to a bunch of Kubernetes resources.
Compose on Kubernetes is a good solution if you want to keep running docker-compose in parallel with deploying on Kubernetes, and so plan to keep maintaining the docker-compose format.
If you're migrating completely to Kubernetes and don't plan to continue working with docker-compose, it's probably better to complete the migration using Kompose and use its output as the starting point for maintaining the configuration directly as Kubernetes resources (see the sketch below).
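For illustration, given a hypothetical compose file like the one below, running kompose convert -f docker-compose.yml emits Kubernetes manifests for each compose service (roughly a Deployment per service, plus a Service for each published port), which you can then check in and maintain directly:

```yaml
# docker-compose.yml (hypothetical input for `kompose convert`)
version: "3"
services:
  web:
    image: myregistry/web:1.0.0
    ports:
      - "8080:8080"
  redis:
    image: redis:7
    ports:
      - "6379:6379"
```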

Multiple pods using same database on kubernetes

I would like to know if it is possible for multiple pods in the same Kubernetes cluster to access a database which is configured using persistent volumes on a Google cloud persistent disk.
Currently I am building a microservices architecture web app which has 3 Node APIs in different pods, all accessing the same database. How do I achieve this with Kubernetes?
Kindly let me know if my architecture is right as well.
You can certainly connect multiple Node-based app pods to the same database. It is sometimes said that microservices shouldn't share a database, but this depends on what your apps are doing, the project's history, and the extent to which you want the parts to be worked on separately.
There are questions you have to answer about running databases at scale, such as your future load and whether you want to use relational databases if you're going to try to span availability zones. And there are some questions specific to Kubernetes, especially around how you associate DB Pods with their data. See https://stackoverflow.com/a/53980021/9705485. Another popular option is to use a managed DB service from a cloud provider. If you do run the DB in k8s, then I'd suggest looking for a Helm chart or looking at an operator, such as the KubeDB operator, to avoid crafting the Kubernetes descriptors yourself and to get more guidance on running and setting up the DB. A sketch of how the three apps share one database follows.
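The sharing itself is straightforward: each API pod points at the database's Service DNS name, for example via an environment variable. A minimal, hypothetical fragment that each of the three Node API Deployments could include:

```yaml
# Hypothetical env fragment repeated in all three Node API Deployments: every
# pod reaches the single database through the same in-cluster Service name.
env:
  - name: DATABASE_URL
    value: postgres://db.default.svc.cluster.local:5432/appdb
```

Note that the pods share the database through its Service, not by mounting the same disk: a GCE persistent disk volume is typically ReadWriteOnce, so only the single database pod mounts it, and everything else talks to that pod over the network.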
If it's a new project and you've not used k8s before, then you'll also have to decide where to host your code, your Docker images, and your deployment descriptors, and how to set up your CI pipelines. If you don't have answers to these questions already, I'd suggest looking at Jenkins X, as it provides out-of-the-box defaults for a whole cluster and CI setup, plus a template ('build pack') for building Node apps and deploying them to staging and prod environments through a pipeline.