Tarantool master-master replication - nosql

I have a cluster with two nodes in a replica set (a master and a replica).
How should I configure master-master replication?
I tried changing the read_only flag on the replica, but it doesn't work.

You should use the all_rw replica set parameter; it is available via both the UI and the API.
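For example, if the cluster is managed with Tarantool Cartridge, the flag can be set from the Lua admin API. This is only a sketch under assumptions: the replica set UUID is a placeholder, and the exact return values may differ between Cartridge versions.

```lua
-- Sketch only: enable all_rw for one replica set through the Cartridge Lua
-- admin API (run in the console of any instance that is joined to the cluster).
-- The UUID is a placeholder; look it up in the web UI or via
-- cartridge.admin_get_replicasets().
local cartridge = require('cartridge')

local topology, err = cartridge.admin_edit_topology({
    replicasets = {
        {
            uuid   = 'bbbbbbbb-0000-4000-b000-000000000001', -- your replica set UUID
            all_rw = true,  -- make every instance in the replica set writable
        },
    },
})
assert(topology ~= nil, err)
```

The same change can be made through the replica set settings in the web UI or the GraphQL API.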

Related

Deploy a Mongo database with a single master and two read replicas in a Kubernetes cluster of at least 3 worker nodes

Deploy a Mongo database with a single master and two read replicas in a Kubernetes cluster of at least 3 worker nodes that are available in different availability zones.
Points to keep in mind while deploying the DB:
Each DB replica should be deployed on a separate worker node in a different availability zone, for high availability (see the sketch after this list).
Autoscale the read replicas if needed.
Data should be persistent.
Try to run the containers in non-privileged mode if possible.
Use best practices as much as you can.
Push the task to a separate branch with a proper README file.
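The zone-spreading, persistence and non-privileged requirements could look roughly like the following StatefulSet. This is only a sketch under assumptions: the names, image tag, user IDs and storage size are placeholders, and the headless Service and the replica set initialisation (rs.initiate) are left out.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo            # headless Service (not shown) gives each pod a stable DNS name
  replicas: 3                   # 1 primary + 2 secondaries, elected by the replica set itself
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: mongo
              topologyKey: topology.kubernetes.io/zone   # at most one mongo pod per zone
      securityContext:            # run as a non-root user (non-privileged)
        runAsNonRoot: true
        runAsUser: 999
        fsGroup: 999
      containers:
        - name: mongod
          image: mongo:6.0        # placeholder tag
          args: ["--replSet", "rs0", "--bind_ip_all"]
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: data
              mountPath: /data/db
  volumeClaimTemplates:           # one persistent volume per pod
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```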

Use AWS RDS Postgres Replicas as a single cluster with 1 endpoint

RDS Postgres read replicas can scale up to 5 replicas, but when I create a replica it is created as a single instance, not as part of a cluster.
I want to use RDS Postgres read replicas so that my single application can handle high TPS, with the load shared across multiple replicas.
I know this is possible with Aurora replicas, since Aurora creates a cluster of replicas that has a single endpoint and can scale in or out. But regular RDS Postgres replicas are created as single instances with different endpoints.
Is it possible to make RDS Postgres replicas act as a cluster with one endpoint?
Clusters are for Aurora, not for plain RDS, so make sure you choose Aurora when you create your database in the AWS Console.
@Marin is correct.
RDS does not provide automatic load balancing between running reader instances; you have to balance the load across replica instances yourself.
Aurora, on the other hand, provides both automatic load balancing and auto scaling across its reader instances.
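As a rough illustration of the Aurora route (a sketch only: identifiers, instance class and password are placeholders, and the reader endpoint shown at the end is just the general naming pattern):

```sh
# Create an Aurora PostgreSQL cluster
aws rds create-db-cluster \
    --db-cluster-identifier my-aurora-pg \
    --engine aurora-postgresql \
    --master-username postgres \
    --master-user-password '<password>'

# Add a writer instance and one or more reader instances to the cluster
aws rds create-db-instance \
    --db-instance-identifier my-aurora-pg-writer \
    --db-cluster-identifier my-aurora-pg \
    --db-instance-class db.r6g.large \
    --engine aurora-postgresql

aws rds create-db-instance \
    --db-instance-identifier my-aurora-pg-reader-1 \
    --db-cluster-identifier my-aurora-pg \
    --db-instance-class db.r6g.large \
    --engine aurora-postgresql

# All readers are reachable through the cluster's single reader endpoint, e.g.
#   my-aurora-pg.cluster-ro-<id>.<region>.rds.amazonaws.com:5432
```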

Why do we need more than 3 masters in a cluster for Kubernetes HA?

I see that most of the K8s master components have a leader election process, except the API server. If only one node is the leader at any point in time, why would we need more than 3 masters for a bigger K8s cluster?
The requirement of a minimum of 3 hosts comes from the fact that a Kubernetes HA cluster uses etcd for storing and syncing configuration, and etcd requires a minimum of 3 nodes to ensure HA: consensus needs a quorum of floor(n/2) + 1 members, so a 3-node cluster tolerates the loss of one node while a 2-node cluster tolerates none. In general, you need 2n+1 control-plane nodes to tolerate n failures when deploying a Kubernetes HA cluster.
In a single-master setup, the master node manages etcd, the API server, the controller manager and the scheduler, along with the worker nodes. If that single master node fails, the control plane is unavailable and the cluster can no longer be managed.
A multi-master setup, by contrast, provides high availability for a single cluster and improves network performance because all the masters behave like a unified data center.
A multi-master setup protects against a wide range of failure modes, from the loss of a single worker node to the failure of a master node's etcd service. By providing redundancy, a multi-master cluster gives your end users a highly available system.
Do not use a cluster with two master replicas. Consensus in a two-replica cluster requires both replicas to be running when changing persistent state, so a failure of either replica puts the cluster into a majority-failure state. A two-replica cluster is therefore inferior, in terms of HA, to a single-replica cluster.
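For reference, here is the quorum arithmetic (quorum = floor(n/2) + 1) worked out for small clusters:

```
Cluster size   Quorum   Failures tolerated
1              1        0
2              2        0
3              2        1
5              3        2
```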
Here is some useful documentation: kubernetes-ha-cluster, creating-ha-cluster.
Articles: ha-cluster, ha.

How do I configure a Kubernetes Replication Controller to ensure there is a replica on each worker node/minion?

Is there a way to configure an RC such that I have a single replica on each of my worker nodes?
I just created an RC with two replicas for Elasticsearch, and it placed both instances on just one of my worker nodes. I would prefer to have one instance on each worker node.
This is particularly important for an application like Elasticsearch that uses persistent storage on the Docker host: having two Elasticsearch instances share the same datastore would likely cause issues.
How is this possible to achieve?
Environment:
1x Kubernetes master - physical server running CoreOS
2x Kubernetes nodes - physical servers running CoreOS
You can't choose nodes directly for pods created by scaling up a replication controller; the scheduler assigns nodes based on constraints. You can artificially prevent pods from landing on the same node by making them use a resource that a node has only one of, such as a hostPort.
The daemon controller proposal (https://github.com/kubernetes/kubernetes/pull/13368) sounds more like what you want; it would let you spread pods across nodes, as in the sketch below.
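That proposal became the DaemonSet resource in later Kubernetes releases. A minimal sketch in today's API, assuming a placeholder image tag and node-local storage under a placeholder hostPath (Elasticsearch-specific tuning is omitted):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: elasticsearch
spec:
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: elasticsearch:7.17.10        # placeholder tag
          ports:
            - containerPort: 9200
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
      volumes:
        - name: data
          hostPath:
            path: /var/lib/elasticsearch      # node-local datastore, one copy per node
```

A DaemonSet schedules exactly one pod per matching node, so each node gets its own Elasticsearch instance and its own datastore.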

Adding a New Node to an Existing Mongo Replica Set with Authentication

I have an existing replica set with simple authentication configured, and I want to add a new node to it.
I currently use the following flow to do so:
Start the new node as a standalone with authentication enabled and the same keyfile as all the other nodes
Create an admin user locally (with the exact same credentials as on the nodes already in the replica set)
Stop the node
Start it again with the replica set settings
Add it to the replica set
But this feels awkward. Having to recreate the user(s) for each node seems unnecessary; is there no other way to just specify the keyfile and connect the new node to the replica set?
Thanks
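For reference, the flow described above, written out with placeholder hosts, ports, paths and credentials (using the current mongosh shell):

```sh
# 1. Start the new node standalone, with auth and the shared keyfile
mongod --port 27017 --dbpath /data/db --auth --keyFile /etc/mongo/keyfile

# 2. Create the admin user locally, with the same credentials as on the other nodes
mongosh --port 27017 --eval '
  db.getSiblingDB("admin").createUser({
    user: "admin",
    pwd: "<password>",
    roles: [ { role: "root", db: "admin" } ]
  })'

# 3 + 4. Stop the node, then start it again with the replica set settings
mongod --port 27017 --dbpath /data/db --keyFile /etc/mongo/keyfile --replSet rs0

# 5. From the primary, add the new member to the replica set
mongosh "mongodb://admin:<password>@primary-host:27017/admin" --eval 'rs.add("new-host:27017")'
```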