Kong: replicate data across multiple servers - PostgreSQL

I am using the Kong API gateway. How can I use the same data across multiple Kong server instances?
I am deploying Kong in Docker containers. I tried committing all the PostgreSQL data and generating a new Docker Postgres DB image from it. I thought reusing this image would solve the problem, but it did not help.

Use a database to store the configuration; all your Kong nodes will read their data from this single source.
You can use decK to automate your deployment.
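As an illustration, a minimal docker-compose sketch of that layout might look like the following; the service names, image tags, and credentials are placeholders. Both Kong nodes simply point at the same Postgres service instead of baking data into an image:

```yaml
version: "3"
services:
  kong-db:
    image: postgres:11
    environment:
      POSTGRES_USER: kong
      POSTGRES_DB: kong
      POSTGRES_PASSWORD: kongpass   # placeholder credential
    volumes:
      - kong-db-data:/var/lib/postgresql/data

  kong-node-1:
    image: kong:latest
    environment: &kong-env          # YAML anchor so both nodes share the config
      KONG_DATABASE: postgres
      KONG_PG_HOST: kong-db         # every node talks to the same database
      KONG_PG_USER: kong
      KONG_PG_PASSWORD: kongpass
    depends_on:
      - kong-db

  kong-node-2:
    image: kong:latest
    environment: *kong-env          # identical settings, second instance
    depends_on:
      - kong-db

volumes:
  kong-db-data:
```

You would still run the migrations once against the shared database (e.g. `kong migrations bootstrap`) before starting the nodes.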

Related

WSO2 APIM 3 docker persist API data

I installed WSO2 APIM 3 using docker-compose and created some APIs. When I do a docker-compose down and then docker-compose up, the API data is gone. How do I persist the data?
Thanks.
Basically, you need to mount storage for the databases and the API Manager nodes. For APIM nodes, you need to mount the APIM/repository/deployment/server folder to a volume. You can find details at https://docs.wso2.com/display/AM260/Common+Runtime+and+Configuration+Artifacts
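For example, a hypothetical docker-compose excerpt along these lines (the exact service names and carbon-home path depend on your APIM image and version):

```yaml
services:
  api-manager:
    image: wso2/wso2am:3.0.0
    volumes:
      # persist deployed APIs/artifacts across docker-compose down/up;
      # the path inside the container is version-dependent
      - apim-server:/home/wso2carbon/wso2am-3.0.0/repository/deployment/server

  mysql:
    image: mysql:5.7
    volumes:
      # persist the database files themselves
      - apim-db:/var/lib/mysql

volumes:
  apim-server:
  apim-db:
```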

Two services for the same Pod on GKE

Question
Is it problematic to create two Services for the same pod, one for internal access and the other for external access?
Context
I have a simple app running on GKE.
There are two pods, each with one container:
flask-pod, which runs a containerized flask app
postgres-pod, which runs a containerized postgres DB
The flask app accesses the postgres DB through a ClusterIP Service in front of it.
Concern
I have also connected a client app, TablePlus (running on my machine), to the postgres DB through a LoadBalancer Service. So I now have two separate Services to access my postgres DB. Is this redundant, or can it cause problems?
Thanks for your help.
It is perfectly fine. If you look at StatefulSets, you define one headless Service that is used for internal purposes and another Service to allow access from clients.
This approach is absolutely valid; there is nothing wrong with it. You can create as many Services per Pod as you like.
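A sketch of what that looks like, with illustrative names, labels, and ports: both Services select the same pod, one as a ClusterIP for in-cluster traffic and one as a LoadBalancer for external clients.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-internal
spec:
  type: ClusterIP          # reachable only inside the cluster (for the flask app)
  selector:
    app: postgres          # both Services match the same pod labels
  ports:
  - port: 5432
    targetPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-external
spec:
  type: LoadBalancer       # exposes the same pod to clients like TablePlus
  selector:
    app: postgres
  ports:
  - port: 5432
    targetPort: 5432
```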

Multiple pods using same database on kubernetes

I would like to know if it is possible for multiple pods in the same Kubernetes cluster to access a database which is configured using persistent volumes on a Google Cloud persistent disk.
Currently I am building a microservices architecture web app which has three Node.js APIs in different pods, all accessing the same database. How do I achieve this with Kubernetes?
Kindly let me know if my architecture is right as well.
You can certainly connect multiple node-based app pods to the same database (a minimal wiring sketch follows at the end of this answer). It is sometimes said that microservices shouldn't share a database, but this depends on what your apps are doing, the project history, and the extent to which you want the parts to be worked on separately.
There are questions you have to answer about running databases at scale, such as your future load and whether you want to use relational databases if you're going to try to span availability zones. And there are some questions specific to Kubernetes, especially around how you associate DB Pods with their data. See https://stackoverflow.com/a/53980021/9705485. Another popular option is to use a managed DB service from a cloud provider. If you do run the DB in Kubernetes, then I'd suggest looking for a Helm chart or at an operator, such as the KubeDB operator, to avoid crafting the Kubernetes descriptors yourself and to get more guidance on running and setting up the DB.
If it's a new project and you've not used Kubernetes before, then you'll also have to decide where to host your code, your Docker images, and your deployment descriptors, and how to set up your CI pipelines. If you don't have answers to these questions already, I'd suggest looking at Jenkins X, as it will provide you with out-of-the-box defaults for a whole cluster and CI setup, plus a template (a 'build pack') for building Node apps and deploying them to staging and production environments through a pipeline.
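To illustrate the shared-database wiring (names, image, and connection string are placeholders): each API Deployment just points at the same in-cluster database Service by its DNS name.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders-api
  template:
    metadata:
      labels:
        app: orders-api
    spec:
      containers:
      - name: orders-api
        image: myrepo/orders-api:1.0   # placeholder image
        env:
        - name: DATABASE_URL
          # the same value goes in all three API deployments; "db" is the
          # ClusterIP Service in front of the database pod
          value: postgres://app:secret@db.default.svc.cluster.local:5432/appdb
```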

How do I run HA MongoDB in my Kubernetes cluster without Portworx?

I want to have a MongoDB deployment as a service, following the database-per-service microservice architecture model.
Right now I am using Helm charts to deploy MongoDB, defining a persistent volume and persistent volume claims.
But I want to deploy MongoDB as HA, storing the data on EBS or similar.
When I checked online for a solution, everything suggests Portworx. Is there a way to do it without using Portworx?
Any help appreciated.
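For reference, one common Portworx-free pattern is a MongoDB replica set run as a StatefulSet whose volumeClaimTemplates dynamically provision EBS-backed volumes through a standard StorageClass. The sketch below uses assumed names and sizes and is not a production config:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo        # headless Service gives each member a stable DNS name
  replicas: 3               # three replica-set members for HA
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo:4.2
        args: ["--replSet", "rs0", "--bind_ip_all"]
        volumeMounts:
        - name: data
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: gp2   # AWS EBS dynamic provisioner; one volume per member
      resources:
        requests:
          storage: 10Gi
```

You would still need the headless `mongo` Service and an `rs.initiate()` step to form the replica set; charts such as `bitnami/mongodb` automate both.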

GitLab HA with Kubernetes and Gluster

I currently have GitLab omnibus set up on Docker. I plan to make it HA by moving it to Kubernetes, with persistence provided by Gluster. I have played around with configuring Kubernetes with Gluster. Now it's time to bring GitLab into Kubernetes. GitLab uses PostgreSQL as its default DB.
My query is: to implement HA, should I
a) split GitLab into a GitLab application container and a PostgreSQL container, and run each in its own cluster of pods, i.e., separate deployments of replicas of the GitLab app and PostgreSQL,
OR
b) keep using the omnibus installer and just have replicas of this single, standalone container?
Does it really make any difference whether
1) writes happen to a DB cluster exposed via a Service to the GitLab app,
OR
2) writes happen directly to the omnibus GitLab container (which has the DB within itself)?
I just want to make sure that I don't unnecessarily end up making the setup complex. Having GitLab in Kubernetes along with Gluster already makes things a little complex. So does splitting the app and DB make sense, or will the omnibus setup suffice? I am concerned about concurrent writes to the DB.
According to http://docs.gitlab.com/ce/install/kubernetes/gitlab_omnibus.html#introduction you should use dedicated Redis and PostgreSQL HA clusters, i.e., options b) and 1).
For less downtime, it is better to use a PostgreSQL master-slave cluster (https://www.postgresql.org/docs/10/static/different-replication-solutions.html) and a Redis master-slave cluster (https://redis.io/topics/cluster-tutorial). "Note that the minimal (Redis) cluster that works as expected requires to contain at least three master nodes".
If you use only GlusterFS to provide failover for PostgreSQL, you can get errors that require manual repair when one DB instance crashes and another comes up. For example: How do I fix Postgres so it will start after an abrupt shutdown?
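As a sketch of option b) combined with 1): the omnibus image keeps serving the application while its bundled PostgreSQL and Redis are disabled, so writes go to external HA clusters. The hostnames and password below are placeholders; the `GITLAB_OMNIBUS_CONFIG` environment variable is how the gitlab/gitlab-ce image accepts gitlab.rb settings.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab
spec:
  replicas: 2               # assumes repository data sits on shared storage (e.g. Gluster)
  selector:
    matchLabels:
      app: gitlab
  template:
    metadata:
      labels:
        app: gitlab
    spec:
      containers:
      - name: gitlab
        image: gitlab/gitlab-ce:latest
        env:
        - name: GITLAB_OMNIBUS_CONFIG
          value: |
            postgresql['enable'] = false
            gitlab_rails['db_host'] = 'postgres-ha.default.svc.cluster.local'
            gitlab_rails['db_password'] = 'changeme'
            redis['enable'] = false
            gitlab_rails['redis_host'] = 'redis-master.default.svc.cluster.local'
```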