Hello, good morning everyone. I am using an AWS memcached cluster but have found that replication between the cluster servers is not synchronous.
server2xx.xxx.xxx.use1.cache.amazonaws.com:11211
get TEST_KEY_00XX
END
server1xx.xxx.xxx.use1.cache.amazonaws.com:11211
Connected to serverxxx.xxx.xxx.use1.cache.amazonaws.com.
Escape character is '^]'.
get TEST_KEY_00XX
VALUE TEST_KEY_00XX X XXXXX
(binary value data omitted)
Is there any way to set up replication between all instances? I don't need to keep using the AWS service; I can build the infrastructure manually.
Regards,
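For background: memcached itself does not replicate at all; clients decide which node owns a key by hashing it, which is why the key above exists on one node while a get on the other returns END. A minimal sketch of that client-side mapping (the node list and hashing scheme here are illustrative placeholders, not ElastiCache's exact algorithm):

```python
import hashlib

# Hypothetical node list, mirroring a two-node cluster.
NODES = [
    "server1xx.use1.cache.amazonaws.com:11211",
    "server2xx.use1.cache.amazonaws.com:11211",
]

def node_for(key: str) -> str:
    """Map a key to exactly one node, as a memcached client library does."""
    digest = hashlib.md5(key.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(NODES)
    return NODES[index]

# Every client using the same hashing scheme agrees on the owner,
# so each key lives on exactly one node; the others never see it.
owner = node_for("TEST_KEY_00XX")
```

This is sharding for capacity, not replication for availability; if you need every node to hold every key, you must put something like mcrouter or repcached in front of (or instead of) plain memcached.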
I'm really new to MongoDB and I'm having a hard time trying to understand sharding.
My PC is the host, and I have created two VMs using VirtualBox. On my PC (host) there is a DB with some data. My issue is which of those 3 components should be the Config Server, the Shard, and the Query Router (mongos). Can somebody please help me by explaining this? (I have read the documentation and still haven't understood it completely.)
A sharded cluster needs to run at least three processes; a "decent" sharded cluster runs at least 10 (1 mongos router, 3 config servers, 2 × 3 shard servers). In production they run on different machines, but it is no problem to run them all on the same machine.
I would suggest these steps for learning:
Deploy a standalone MongoDB
Deploy a Replica Set, see Deploy a Replica Set or Deploy Replica Set With Keyfile Authentication
Deploy a Sharded Cluster, see Deploy a Sharded Cluster or Deploy Sharded Cluster with Keyfile Authentication
In the beginning it is overkill to use different machines; use localhost for everything.
You can run multiple mongod/mongos instances on one machine; just ensure each uses a different port (only one of them can use the default port 27017), a different dbPath, and preferably a different log file.
You may run your first trial without authentication. Once you have that working, enable keyfile authentication, and as a next step use authentication by x.509 certificates.
If you use Windows, then have a look at https://github.com/Wernfried/mongoDB-oneclick
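To make the "different ports, different dbPath" point concrete, here is a small illustrative sketch that only builds the command lines for a minimal localhost test cluster (the ports, paths, and replica-set names are made up; it does not start any process):

```python
def mongod_cmd(role: str, port: int, dbpath: str, repl_set: str) -> list:
    """Build a mongod command line for a localhost test cluster."""
    cmd = ["mongod", "--port", str(port), "--dbpath", dbpath,
           "--logpath", f"{dbpath}/mongod.log", "--replSet", repl_set]
    if role == "configsvr":
        cmd.append("--configsvr")
    elif role == "shardsvr":
        cmd.append("--shardsvr")
    return cmd

# One config server and one single-member shard, each on its own
# port and dbPath; mongos takes the default 27017 since it has no dbPath.
commands = [
    mongod_cmd("configsvr", 27019, "/tmp/cfg0", "cfgRS"),
    mongod_cmd("shardsvr", 27018, "/tmp/sh0", "shardRS"),
    ["mongos", "--port", "27017", "--configdb", "cfgRS/localhost:27019"],
]
```

The same pattern scales up to the 10-process layout described above: more shard members, more config-server members, each with its own port/dbPath/log file.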
Assuming I have 2 Postgres servers (1 master and 1 slave) and I'm using Patroni for high availability:
1) I intend to have a three-machine etcd cluster. Is it OK to also use the 2 Postgres machines for etcd, plus another server, or is it preferable to use machines that are not used by Postgres?
2) What are my options for directing the read requests to the slave and the write requests to the master without using pgpool?
Thanks!
Yes, it is best practice to run etcd on the two PostgreSQL machines (plus the third server).
The only safe way to do that is in your application: it has to be taught to read from one database connection and write to another.
There is no safe way to distinguish a writing query from a non-writing one; consider
SELECT delete_some_rows();
The application also has to be aware that changes will not be visible immediately on the replica.
Streaming replication is of limited use when it comes to scaling...
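The "read from one connection, write to another" advice can be sketched as a toy router where the caller declares intent (the DSNs below are placeholders, and this is not a real connection pooler):

```python
class ConnectionRouter:
    """Route statements by declared intent, not by inspecting SQL.

    You cannot safely classify SQL text as read-only --
    SELECT delete_some_rows() writes -- so the caller must say so.
    """

    def __init__(self, primary_dsn: str, replica_dsn: str):
        self.primary_dsn = primary_dsn
        self.replica_dsn = replica_dsn

    def dsn_for(self, intent: str) -> str:
        if intent == "write":
            return self.primary_dsn   # all writes go to the master
        if intent == "read":
            return self.replica_dsn   # reads may see slightly stale data
        raise ValueError(f"unknown intent: {intent}")

router = ConnectionRouter("host=master dbname=app", "host=slave dbname=app")
```

Note the comment on the read path: the application must tolerate replication lag, since a row it just wrote may not yet be visible on the replica.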
We have recently set up streaming replication on our Postgres servers (t01, t02).
t01 is the master and t02 is the slave. I want to understand the two issues below:
Recently the /var directory of the t01 server got full, and the app team was not able to access the application. My understanding was that if t01's /var was full, connections should be made to t02 and the application should start using it, as t02's /var was not full.
If we shut down the t01 server, will my application automatically use the t02 database? Will streaming replication provide HA in this case or not?
No, PostgreSQL won't fail over to the standby on its own. Configuring failover properly is a hard problem, and you need specialized cluster software like Patroni to handle it.
As it is, you will have to fail over manually by running pg_ctl promote on the standby.
You will also have to reconfigure your clients to use the new server. To avoid that, you could use a virtual IP address that you move over to the standby, or set up the clients to try both servers.
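The "try both servers" option can live entirely in the client. With libpq-based drivers you can simply list both hosts in the connection string (host=t01,t02 target_session_attrs=read-write); the generic logic is a short loop (a sketch, where connect stands in for your driver's connect call):

```python
def connect_first_available(hosts, connect):
    """Try each host in order; return the first connection that succeeds."""
    last_error = None
    for host in hosts:
        try:
            return connect(host)
        except OSError as exc:
            last_error = exc          # this host is down, try the next one
    raise ConnectionError(f"no server reachable: {last_error}")
```

After a manual pg_ctl promote, the same loop lands on t02 as soon as t01 stops accepting connections.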
I am trying to set up a very simple cluster of 2 ejabberd nodes. However, while going through the official ejabberd documentation and using the join_cluster argument of the ejabberdctl script, I always end up with a multi-master cluster where both Mnesia databases have replicated data.
Is it possible to set up an ejabberd cluster in master-slave mode? And if yes, what am I missing?
In my understanding, a slave gets the data replicated but is simply not active; the slave needs the data to be able to take over the master's role at some point.
That seems to mean that the core of the setup you describe is not about disabling replication but about not sending traffic to the slave, no?
In that case, it is just a matter of configuring your load-balancing mechanism to route the traffic according to your preference.
Can a worker job on Heroku make a socket (e.g. POP3) connection to an external server?
I guess scaling the worker process to 2 or more will run jobs in parallel, and they will all try to connect to the same server/port from the same client/port. Am I right, or am I missing something?
Yes, Heroku workers can connect to the outside world. However, there is no built-in provision for handling the sort of problems you mention; you'd need to handle that part yourself.
Just look at the workers as a number of separate EC2 instances.
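On the "same client/port" worry specifically: each worker (and each socket within a worker) gets its own ephemeral source port, so parallel connections to the same server:port do not collide. A quick local demonstration using only the standard library and a throwaway listener on localhost:

```python
import socket

# Throwaway server on an OS-assigned localhost port.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(2)
target = listener.getsockname()

# Two clients connecting to the same target, as two parallel jobs would.
a, b = socket.socket(), socket.socket()
a.connect(target)
b.connect(target)

# The OS assigns each client socket a distinct ephemeral source port.
ports = (a.getsockname()[1], b.getsockname()[1])

for s in (a, b, listener):
    s.close()
```

The remote end sees two distinct (address, port) pairs, so the connections are independent; any coordination problems (e.g. two workers fetching the same POP3 mailbox) are application-level, not socket-level.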