Postgres 2-node cluster setup - postgresql

I want to set up a 2-node cluster for Postgres. How do I configure a primary and a standby in a 2-node cluster? I need to know the best way to do it and the necessary configuration.
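One common way to do this is PostgreSQL's built-in streaming replication, with the second node running as a hot standby (manual failover via pg_ctl promote, or a tool such as Patroni or repmgr if you need automatic failover). A minimal sketch, assuming PostgreSQL 12+ and hypothetical hosts pg-primary (10.0.0.1) and pg-standby (10.0.0.2):

    # --- on pg-primary ---
    # allow WAL streaming and replication connections (values are illustrative)
    cat >> "$PGDATA/postgresql.conf" <<'EOF'
    wal_level = replica
    max_wal_senders = 5
    hot_standby = on
    EOF
    # let the standby's (hypothetical) address connect as a replication user
    echo "host replication replicator 10.0.0.2/32 scram-sha-256" >> "$PGDATA/pg_hba.conf"
    psql -c "CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'change-me';"
    pg_ctl -D "$PGDATA" restart

    # --- on pg-standby (empty data directory) ---
    # clone the primary; -R writes primary_conninfo and creates standby.signal (PG 12+)
    pg_basebackup -h pg-primary -U replicator -D "$PGDATA" -R -X stream -P
    pg_ctl -D "$PGDATA" start
    # to fail over later: pg_ctl -D "$PGDATA" promote

Note that with only two nodes there is no quorum, so safe automatic failover needs an external arbiter; many people run a 2-node setup as primary plus read-only/warm standby and promote by hand.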

Related

Use AWS RDS Postgres Replicas as a single cluster with 1 endpoint

RDS Postgres replicas can scale up to 5 replicas. But when I create a replica, it is created as a single instance, not as a cluster.
I want to use RDS Postgres read replicas as a cluster so that my single application can handle high TPS, with the load shared across multiple replicas.
I know this is possible with Aurora replicas, since Aurora creates a cluster of replicas that has a single endpoint and can scale in or out. But all normal RDS Postgres replicas are created as single instances with different endpoints.
Is it possible to make RDS Postgres replicas work as a cluster with one endpoint?
Clusters are for Aurora, not for RDS, so you have to make sure you choose Aurora when you create your database in the AWS Console.
@Marin is correct.
RDS does not provide automatic load balancing between running reader instances.
You have to manage load balancing between replica instances yourself.
In Aurora, there is automatic load balancing as well as auto scaling across the different reader instances.
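Since plain RDS leaves read balancing to you, one common workaround is to put your own TCP load balancer (or a weighted Route 53 record) in front of the replica endpoints. A rough HAProxy sketch, where the endpoint names are placeholders:

    # write an HAProxy config that round-robins reads across the replicas
    cat > /etc/haproxy/haproxy.cfg <<'EOF'
    defaults
        mode tcp
        timeout connect 5s
        timeout client  30m
        timeout server  30m

    listen postgres_readers
        bind *:5433
        balance roundrobin
        server replica1 mydb-replica-1.xxxxxx.us-east-1.rds.amazonaws.com:5432 check
        server replica2 mydb-replica-2.xxxxxx.us-east-1.rds.amazonaws.com:5432 check
    EOF
    systemctl restart haproxy
    # point the application's read-only connection string at this host on port 5433;
    # writes still go to the primary instance endpoint

Replication lag still applies to every replica, so reads through such an endpoint are eventually consistent.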

How to add remote vm instance as worker node in kubernetes cluster

I'm new to Kubernetes and trying to explore the new things in it. So, my question is:
Suppose I have an existing Kubernetes cluster with 1 master node and 1 worker node. This setup is on AWS; now I have one more VM instance available on Oracle Cloud Platform, and I want to configure that VM as a worker node and attach it to the existing cluster.
Is it possible to do so? Does anybody have any suggestions regarding this?
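Mechanically it is possible if the cluster was set up with kubeadm and the Oracle VM can reach the API server (for example over a VPN, which the answers below discuss). A sketch with placeholder addresses:

    # on the existing AWS master: generate a fresh join command
    kubeadm token create --print-join-command

    # on the Oracle Cloud VM: install a container runtime, kubelet and kubeadm,
    # then run the command printed above, e.g.
    kubeadm join <master-reachable-ip>:6443 \
        --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>

    # back on the master: the new node should register (it still needs a pod
    # network that can span both environments before pods can talk to each other)
    kubectl get nodes -o wide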
I would instead divide your clusters up based on region (unless you have a good VPN between your Oracle and AWS infrastructure).
You can then run applications across clusters. If you absolutely must have one cluster that is geographically separated, I would create a master (etcd host) in each region that you have a worker node in.
Communication between the worker nodes and the master node is critical for a Kubernetes cluster. Adding nodes from on-prem to a cloud provider, or from a different cloud provider, will cause a lot of issues from a network perspective.
A VPN connection between AWS and Oracle Cloud would be needed, and every request from the worker node would (probably) have to cross an ocean to reach the master node.
EDIT: From the Kubernetes docs: clusters cannot span clouds or regions (this functionality will require full federation support).
https://kubernetes.io/docs/setup/best-practices/multiple-zones/

Kubernetes Citus setup with individual hostname/ip

I am in the process of learning Kubernetes with a view to setting up a simple cluster with Citus DB and I'm having a little trouble with getting things going, so would be grateful for any help.
I have a Docker image containing my base Debian image configured for Citus for the project. At this point I want to set it up with one master that mounts a GCP persistent disk holding a Postgres DB, which I'll then distribute among the other containers, each mounted with an individual disk holding empty tables (configured with the Citus extension) to receive what gets distributed to it. I'd like to automate this further at some point, but for now I'm aiming for just a master container and eight nodes. My plan is to create a deployment that opens ports 5432 and 80 on each node, and I thought I could create two pods, one to hold the master and one to hold the eight nodes. Ideally I'd want to mount all the disks and then run a post-mount script on the master that finds all the node containers (by IP or hostname??), adds them as Citus nodes, then runs create_distributed_table to distribute the data.
My present confusion is about how to label the individual nodes so they keep their internal address or hostname, and so that if one goes down it is replaced and resumes with the data on its persistent disk. I've read about ConfigMaps and setting hostname aliases, but I'm still unclear about how to proceed. Is this possible, or is this the wrong way to approach this kind of setup?
You are looking for a StatefulSet. That gives you a known number of pod replicas, with attached storage (PersistentVolumes) and consistent DNS names. In the pod spec I would launch only a single copy of the server and use the StatefulSet's replica count to control the number of "nodes" (also a Kubernetes term); if the replica is #0, then it's the master.
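A rough sketch of that shape, using a headless Service so worker N is always reachable as citus-worker-N.citus-workers; the image name, storage size, table and column below are all placeholders:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: citus-workers
    spec:
      clusterIP: None          # headless: gives each pod a stable DNS name
      selector:
        app: citus-worker
      ports:
      - port: 5432
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: citus-worker
    spec:
      serviceName: citus-workers
      replicas: 8
      selector:
        matchLabels:
          app: citus-worker
      template:
        metadata:
          labels:
            app: citus-worker
        spec:
          containers:
          - name: citus
            image: my-citus-image:latest   # placeholder image
            ports:
            - containerPort: 5432
            volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumeClaimTemplates:                # one PD-backed disk per worker
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 100Gi
    EOF

    # on the master/coordinator: register each worker by its stable DNS name,
    # then distribute (table and column names are placeholders)
    for i in $(seq 0 7); do
      psql -c "SELECT master_add_node('citus-worker-$i.citus-workers', 5432);"
    done
    psql -c "SELECT create_distributed_table('my_table', 'my_dist_column');"

If a worker pod dies, the StatefulSet recreates it with the same name and reattaches the same PersistentVolume, so the registration of citus-worker-N.citus-workers on the master stays valid.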

Add/Remove nodes to Apache ZooKeeper cluster?

I have a ZooKeeper cluster with 5 nodes, and multiple applications connect to it to keep their application context settings.
Due to some changes in the data center, my network team wants to replace 2 of the existing ZooKeeper VMs with 2 new VMs. My questions are:
Should I first add the 2 new nodes to the cluster?
Then remove the 2 existing nodes from the cluster?
Do I need to restart all the connected applications with the new cluster configuration?
If possible, please give me the order of actions I should perform.
Appreciate your response.
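A hedged sketch of the usual add-then-remove sequence, assuming ZooKeeper 3.5+ with reconfigEnabled=true and suitable auth (server IDs and hostnames are placeholders); on 3.4 you would achieve the same with edited zoo.cfg files and rolling restarts:

    # 1. start ZooKeeper on the two new VMs with unique myid values (here 6 and 7)
    #    and the current ensemble in their config, then add them one at a time
    zkCli.sh -server existing-zk-1:2181 reconfig -add "server.6=new-vm-1:2888:3888;2181"
    zkCli.sh -server existing-zk-1:2181 reconfig -add "server.7=new-vm-2:2888:3888;2181"

    # 2. once both new followers are synced, remove the two servers being retired,
    #    again one at a time (IDs 4 and 5 here are placeholders)
    zkCli.sh -server existing-zk-1:2181 reconfig -remove 4
    zkCli.sh -server existing-zk-1:2181 reconfig -remove 5

    # 3. applications only need their connection string updated (and a restart if
    #    they cannot reload it) when it explicitly lists the hosts being removed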

How to set up an AWS node and a Vagrant node when the master node is local

Kubernetes 1.2 supports multi-node clusters across multiple service providers. Right now the master node is running on my laptop, and I want to add two worker nodes, one in Amazon and one in Vagrant. How do I achieve this?
Kubernetes 1.2 supports multi-node clusters across multiple service providers
Where did you see this? It isn't actually true. In 1.2 we added support for nodes across multiple availability zones within the same region on the same service provider (e.g. us-central1-a and us-central1-b in the us-central1 region in GCP). But there is no support for running nodes across regions in the same service provider much less spanning a cluster across service providers.
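For context, that multi-zone support just means nodes in different zones of one region get region/zone labels that the scheduler can spread pods over; you can see them with the beta label key used in clusters of that era:

    # show each node's zone label (key used by 1.2-era multi-zone clusters)
    kubectl get nodes -L failure-domain.beta.kubernetes.io/zone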
now the master node is running on my laptop, and I want to add two worker nodes, one in Amazon and one in Vagrant
The worker nodes must be able to connect directly to the master node. I wouldn't suggest exposing your laptop to the internet directly so that it can be reached from an Amazon data center, but would instead advise you to run the master node in the cloud.
Also note that if you are running nodes in the same cluster across multiple environments (AWS, GCP, Vagrant, bare metal, etc) then you are going to have a difficult time getting networking configured properly so that all pods can reach each other.