How to deploy Microservices with MongoDB dependency on Kubernetes?

I have 2 backend API microservices, each of which has its own MongoDB database. I want to deploy 2 or more instances of each of these in a Kubernetes cluster on a cloud provider such as AWS.
Each instance of a microservice runs as a container in a Pod. Is it possible to deploy MongoDB as another container in the same Pod? Or what is the best practice for this use case?
If two or more instances of the same microservice are running in different Pods, do I need to deploy 2 or more instances of MongoDB, or is a single MongoDB instance referenced by the multiple instances of the same microservice? What is the best practice for this use case?
Each microservice is a Spring Boot application. Do I need to do anything special in the Spring Boot application source code just because it will be run as a microservice as opposed to a traditional Spring Boot application?

Each instance of a microservice runs as a container in a Pod. Is it possible to deploy MongoDB as another container in the same Pod?
No. If MongoDB ran as a container in the same Pod, your data would be gone whenever the Pod is recreated, for example when you upgrade your application.
Or what is the best practice for this use case?
Databases in a modern production environment are run as clusters - for availability reasons. It is best practice to either use a managed service for the database or run it as a cluster with e.g. 3 instances on different nodes.
If two or more instances of the same microservice are running in different Pods, do I need to deploy 2 or more instances of MongoDB, or is a single MongoDB instance referenced by the multiple instances of the same microservice?
All your instances of the microservice should access the same database cluster, otherwise they would see different data.
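For a Spring Boot service this simply means every replica gets the same connection string. A minimal application.yml sketch, assuming a hypothetical Kubernetes Service named orders-mongodb in front of the MongoDB cluster and a hypothetical orders database:

```yaml
# application.yml (sketch): every replica of the microservice points at the
# same MongoDB Service, so all instances see the same data.
spring:
  data:
    mongodb:
      uri: mongodb://orders-mongodb.default.svc.cluster.local:27017/orders
```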
Each microservice is a Spring Boot application. Do I need to do anything special in the Spring Boot application source code just because it will be run as a microservice as opposed to a traditional Spring Boot application?
Spring Boot is designed according to The Twelve-Factor App methodology and is therefore well suited to running in a cloud environment such as Kubernetes.

Deploy the MongoDB pod as a StatefulSet. Refer to the Helm chart below to deploy MongoDB.
https://github.com/bitnami/charts/tree/master/bitnami/mongodb
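For reference, a minimal values.yaml sketch for running that chart as a 3-member replica set; the exact keys (architecture, replicaCount, persistence) may differ between chart versions, so check the chart's own values.yaml:

```yaml
# values.yaml (sketch) for the Bitnami MongoDB chart
architecture: replicaset   # run a replica set instead of a single standalone server
replicaCount: 3            # three data-bearing members, scheduled on different nodes
persistence:
  enabled: true
  size: 8Gi                # each member gets its own PersistentVolumeClaim
```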
Using an init container in the Spring Boot pod, you can wait until MongoDB is up and running before the Spring Boot container starts; a sketch is shown after the link below. For more details see:
How can we create service dependencies using kubernetes
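A minimal sketch of such an init container in the Spring Boot Deployment; the names (orders-api, orders-mongodb) and image tags are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders-api
  template:
    metadata:
      labels:
        app: orders-api
    spec:
      initContainers:
        # Block until the MongoDB Service answers on port 27017;
        # only then is the Spring Boot container started.
        - name: wait-for-mongodb
          image: busybox:1.36
          command: ['sh', '-c', 'until nc -z orders-mongodb 27017; do echo waiting for mongodb; sleep 2; done']
      containers:
        - name: orders-api
          image: example/orders-api:1.0.0
          ports:
            - containerPort: 8080
```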

Related

Guidance for deploying Redis Cache for Kubernetes Microservices

I am looking to implement Redis Cache for a number of web applications in Kubernetes, but am not sure how exactly to architect the Redis Cache part.
I was thinking that if I have 5 replicas of my application, they could all use a single Redis Cache in a separate pod, as I wanted to avoid using a sidecar container for each application pod. Then each application would have its own Redis Cache Deployment in Kubernetes, and the application connects to it (through a Service, I guess).
Does this sound like a suitable plan?
How does the application talk to the Redis Cache pod, do I need to expose it via a Service?
I've seen that you should co-locate your Redis Cache and application on the same node. Is this a concern, and is there a way to do this?
With Helm you can easily install Redis on your cluster using the Bitnami chart.
Or, my preference, install a Redis operator and let it do the magic.
Either way, you can install one or multiple Redis instances on your Kubernetes cluster and they will be accessible through a Kubernetes Service at an address like my-redis-service.cool-namespace.svc.cluster.local:6379.
There is no need to co-host Redis on the same node; that is the kind of work Kubernetes does for you.
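As a sketch, the Service in front of the Redis Deployment could look like this (name, namespace and labels are hypothetical); your applications then connect to it by its DNS name instead of to individual pods:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-redis-service
  namespace: cool-namespace
spec:
  selector:
    app: redis        # must match the labels on the Redis pods
  ports:
    - port: 6379
      targetPort: 6379
```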

Migrate Service Fabric Reliable Collections to Kubernetes

We are in the process of migrating our Service Fabric services to Kubernetes. Most of them were "stateless" services and were easy to migrate. However, we have one "stateful" service that uses SF's Reliable Collections pretty heavily.
K8s has StatefulSets, but those are not really comparable to SF's Reliable Collections.
Is there a .NET library or other solution to implement something similar to SF's Reliable Collections in K8s?
AFAIK this cannot be done by using a .Net library.
K8s is all about orchestration. SF, on the other hand, is an orchestrator plus a rich programming/application model plus state management.
If you want something like Reliable Collections in K8s, you have to either
A) build your own replication solution, with leader election and all, or
B) use a private etcd/CockroachDB/etc. store.
This article is pretty good in terms of differences.
https://learn.microsoft.com/en-us/archive/blogs/azuredev/service-fabric-and-kubernetes-comparison-part-1-distributed-systems-architecture#split-brain-and-other-stateful-disasters
"Existing systems provide varying levels of support for microservices, the most prominent being Nirmata, Akka, Bluemix, Kubernetes, Mesos, and AWS Lambda [there’s a mixed bag!!]. SF is more powerful: it is the only data-ware orchestration system today for stateful microservices"
However, they don't solve the coordination problem (saving data on the primary instance will automatically replicate to the others for recovery when the primary instance dies). That's what SF reliable collections does out of the box.
StatefulSets are valuable for applications that require one or more of the following.
Stable (persistent), unique network identifiers.
Stable, persistent storage.
Ordered, graceful deployment and scaling.
Ordered, automated rolling updates.
If your application doesn't require any of these, you should deploy your application using a Deployment.
There is a good Kubernetes guide on how to Run a Replicated Stateful Application here.
This page shows how to run a replicated stateful application using a StatefulSet controller. This application is a replicated MySQL database. The example topology has a single primary server and multiple replicas, using asynchronous row-based replication.
The StatefulSet controller starts Pods one at a time, in order by their ordinal index. It waits until each Pod reports being Ready before starting the next one. This ordering, together with Init Containers, is used to perform an orderly startup of MySQL replication.
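For illustration, a stripped-down StatefulSet sketch showing the pieces that matter here: a headless Service name for stable pod DNS, ordered pod creation, and one PersistentVolumeClaim per replica (names, image and sizes are placeholders; the guide linked above has the full manifests):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql            # headless Service that gives each pod a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: change-me    # use a Secret in a real deployment
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:           # one PVC per replica, kept across pod restarts
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```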
Operators can help
While operators are not necessary, they can help run stateful apps on Kubernetes with features like application-level HA management, backups, and restore.
You can use existing Operators or develop your own. The operator package includes all the configurations needed to deploy and manage the application from a Kubernetes point of view – from a StatefulSet to be used to any required storage, rollout strategies, persistence and affinity configuration, and more. Kubernetes will then rely on the operator to validate instances of the application against the specification to ensure it runs in the same way across instances in all clusters it is deployed in.
Some DB operators:
You can deploy a MySQL database using the Kubernetes operator developed by Oracle (currently in a preview state):
https://github.com/mysql/mysql-operator
There’s also a PostgreSQL operator by Crunchydata to deploy PostgreSQL to Kubernetes:
https://github.com/CrunchyData/postgres-operator
MongoDB provides an operator to deploy MongoDB Enterprise to a Kubernetes cluster:
https://github.com/mongodb/mongodb-enterprise-kubernetes
You can find ready-made operators on OperatorHub.io to suit your use case.

Best practices to organize pods in Kubernetes

I have a project that uses the following resources:
A JSF application running under JBoss and using a PostgreSQL database
2 Spring Boot APIs using MongoDB.
So, I have the following Docker containers:
JSF + JBoss in the same container
A PostgreSQL container
A MongoDB container
One container for each Spring Boot app.
In Kubernetes I need to organize these containers into Pods, so my idea is to create the following:
A Pod for the JSF + JBoss container
Another Pod for PostgreSQL
Another Pod for MongoDB
A single Pod for both Spring Boot apps, because they depend on each other.
So, I have 4 Pods and 6 containers. Thinking about K8s best practices, is this a good way to organize my project?
tl;dr: This doesn't follow Kubernetes best practices. Each application should be a separate Deployment or StatefulSet.
A better way to run this in Kubernetes would be a Deployment or StatefulSet for each individual application, so it would be:
One Deployment with a single container for jsf+JBoss
One StatefulSet for PostgreSQL (though I would suggest looking at an Operator to manage your PostgreSQL cluster, e.g. KubeDB)
One StatefulSet for MongoDB (again, I strongly suggest using an Operator to manage your MongoDB cluster, which KubeDB can also handle)
One Deployment each for your Spring Boot applications, assuming they communicate with each other over the network. You can then manage and scale each one independently, regardless of their dependency on each other.

Feasibility of using multi master Kubernetes cluster architecture

I am trying to implement a CI/CD pipeline using Kubernetes and Jenkins. My application has 25 microservices, and I need to deploy them for 5 different clients. The microservice code is the same for all clients, but the configuration for each client is different.
So here I am configuring a Spring Cloud Config server with 5 different profiles/configurations. When building the Docker images, I define the active config server profile by adding it in the Dockerfile. So from 25 microservices I am building 25 * 5 = 125 Docker images, and in total I need to deploy 125 microservice instances in the Kubernetes cluster. These microservices are called from my Angular 2 front-end application.
Considering the performance and response speed of the application, is a single master enough for this architecture, or do I definitely need a multi-master Kubernetes cluster? How can I manage this application?
I am new to cloud and CI/CD pipeline architecture, so I am unsure how to design the workflow. If a single master is enough, I can continue with the current setup; otherwise I need to implement a multi-master Kubernetes HA cluster.
The performance and response speed of the application do not depend on the number of master nodes; a multi-master setup addresses high availability, not performance. That said, you should still consider having at least 3 masters for this implementation: if your only master goes down, the cluster can no longer be managed.
In Kubernetes, the master receives the API calls and acts upon them by driving the current state of the cluster toward the desired state. But in the end it is the worker nodes that do the heavy lifting. So your performance will depend mostly, if not exclusively, on your nodes: if they have enough memory and CPU, you should be fine.
Multi master sounds like a good idea for HA.
You could also look at using Helm, which lets you configure microservices on a per-installation basis so that you don't have to keep re-releasing Docker images each time you configure a new environment. You can then inject the Helm configuration into, say, a ConfigMap that is mounted as an application.yml so that Spring Boot automatically loads the settings.
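A minimal sketch of that idea, assuming a hypothetical client-a-config ConfigMap and orders-api image; Spring Boot is pointed at the mounted file via spring.config.additional-location (set here through the SPRING_CONFIG_ADDITIONAL_LOCATION environment variable), so the same image can serve every client:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: client-a-config
data:
  application.yml: |
    client:
      name: client-a
    feature:
      fancy-dashboard: true
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api-client-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: orders-api-client-a
  template:
    metadata:
      labels:
        app: orders-api-client-a
    spec:
      containers:
        - name: orders-api
          image: example/orders-api:1.0.0          # one image, reused for every client
          env:
            - name: SPRING_CONFIG_ADDITIONAL_LOCATION
              value: /config/application.yml       # tell Spring Boot to load the mounted file
          volumeMounts:
            - name: app-config
              mountPath: /config
      volumes:
        - name: app-config
          configMap:
            name: client-a-config
```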

One Service Fabric cluster or multiple clusters?

I am migrating several of my cloud service web/worker roles into service fabric.
There will be many (around 5+) Service Fabric services (stateless or stateful). Should we put all of them into one Service Fabric cluster, or into multiple clusters? Is there a best practice for cluster planning?
Also, I will add multi-tenant support to my service. Per this post (Service Fabric multi-tenant), I can choose the application-instance-per-customer pattern.
I am wondering if it is a good idea to choose the cluster-per-customer pattern instead.
It depends on your per-tenant requirements, but generally it is better to have a single cluster with multiple applications and services:
A single cluster is much easier to manage than multiple clusters.
Service Fabric was designed to host and manage a large number of applications and services in a single cluster.
Multiple services in a single cluster allows you to utilize your cluster resources much more efficiently and use Service Fabric's resource balancing to manage resources effectively.
Standing up a new cluster, depending on size, can take 30 minutes or more. Creating application instances in a cluster takes seconds.