Is it possible to have one of my replica set members in Atlas and the others on my local machine? (A kind of backup in Atlas in case something goes wrong with my local setup.)
Not really. Atlas is a fully managed service. Having Atlas manage some members of a replica set while you manage the others sounds like a recipe for misconfiguration, miscommunication, and downtime.
You could certainly have one replica set member on a cloud provider instance, with the rest local.
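For example, a minimal sketch of such a mixed set (hostnames and the set name are placeholders; the cloud member is given priority 0 and hidden so it can serve as a backup but never become primary):

    # Run against one of the local members; all three mongod processes
    # must be started with --replSet rs0 and be able to reach each other.
    mongosh --host localmongo1:27017 --eval '
      rs.initiate({
        _id: "rs0",
        members: [
          { _id: 0, host: "localmongo1:27017" },
          { _id: 1, host: "localmongo2:27017" },
          // cloud member: never primary, invisible to clients
          { _id: 2, host: "cloud-vm.example.com:27017", priority: 0, hidden: true }
        ]
      })
    '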
Related
I have an Azure-managed PostgreSQL database.
I want to create a logical replica of it at GCP (Google-managed, if possible).
At Azure, I've set the Azure replication support to Logical. However, this just seems to allow me to create replicas inside Azure. What I want is to create a replica in GCP.
If this were not Azure-managed but self-managed, I would be able to create a tunnel from Azure to GCP and then do the WAL-copy replication.
One might wonder: why? Because I don't want to be locked in to one vendor.
If that cross-cloud replication is not possible, what's the easiest way to pull the entire database off (possibly not just the data with pg_dump, but all its internals too)?
While this question is Azure -> GCP, it seems other combinations, like GCP -> AWS or other vendors, are also unsupported. Or what am I missing?
Cross-cloud replication from an Azure source PostgreSQL to a GCP destination Cloud SQL through conventional native logical replication is possible, and I've tested that it works. I'm sure it would work for self-managed databases too.
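For anyone attempting the same, here is a rough sketch of the native approach (hostnames, database, and user names are placeholders; it assumes logical decoding / wal_level=logical is already enabled on the source, and that the schema was copied to the destination first, e.g. with pg_dump --schema-only):

    # On the Azure source: publish the tables you want to replicate.
    psql "host=azure-source.postgres.database.azure.com dbname=appdb user=admin" \
      -c "CREATE PUBLICATION cross_cloud_pub FOR ALL TABLES;"

    # On the GCP Cloud SQL destination: subscribe to that publication.
    psql "host=GCP_CLOUDSQL_IP dbname=appdb user=postgres" \
      -c "CREATE SUBSCRIPTION cross_cloud_sub
            CONNECTION 'host=azure-source.postgres.database.azure.com dbname=appdb user=admin password=REPLACE_ME'
            PUBLICATION cross_cloud_pub;"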
I am going to create a load balancer in Azure. I have a VM that is already running; I am going to take a backup of the existing VM and create another VM from that backup. So the two servers will have the same configuration and will use the same credentials.
On the existing server I have MongoDB configured, and if I create the new VM it will have the same configuration as the old one. What I want to know is: can the same MongoDB be accessed by the two servers that share this configuration?
Will it create any mess or give any errors?
Can I use it as mentioned above?
Do I need to configure another MongoDB for the second server?
Can anyone please clarify my questions? It would be great to have a clear explanation. Thank you.
MongoDB has built-in support for horizontal scalability and high availability, meaning that you don't need a third-party load balancer; the mongos service, part of a MongoDB sharded cluster, is the load balancer itself. Check the official documentation for MongoDB replication and sharding.
On your questions:
Will it create any mess or give any errors?
If you just copy the data to another VM it will be fine. As long as you don't write to either of the VMs, you can load-balance reads between these independent VMs, but this is a strange approach when you have MongoDB's built-in replication mechanism and can simply add the second VM as a SECONDARY member of a replica set (see the sketch after these answers).
Can I use it as mentioned above?
Sure, you can also use this approach, but why would you need to?
Do I need to configure another MongoDB for the second server?
It depends on the use case, but in general you would prefer to create a three-member replica set; or, if your database is large and write performance is a strong requirement, you may need to distribute the database between multiple servers (shards), in which case you will need more than just three servers.
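A minimal sketch of the built-in alternative mentioned above (hostnames and the set name are placeholders; both mongod processes must be started with --replSet rs0):

    # One-time, on the first VM:
    mongosh --host vm1.example.com:27017 --eval 'rs.initiate()'

    # Then add the second VM; it joins as a SECONDARY and syncs automatically:
    mongosh --host vm1.example.com:27017 --eval 'rs.add("vm2.example.com:27017")'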
I have a project which has the following characteristics:
A local MongoDB replica set on an on-premises server
A cloud MongoDB instance in MongoDB Atlas
The on-premises MongoDB should stay in sync with MongoDB Atlas
The local MongoDB instance may be offline for several days
Once it's online, it should start synchronizing with MongoDB Atlas
Basically, I'm looking for something similar to Realm, except that this solution runs on an actual local server and not a mobile device.
I have looked into live migration, see here. But this doesn't seem to fit this use case entirely, as it's intended for an eventual cutover, which I don't want.
Therefore, how can I achieve the following with MongoDB Atlas? What am I missing?
Can I treat MongoDB Atlas as if it's part of my local replica set, and use the standard replication capability of MongoDB? I.e., Atlas would always be a secondary.
This functionality is not possible with native MongoDB Atlas. You need to look for a customised solution.
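To give an idea of what a customised solution might look like (purely a hypothetical sketch, not an Atlas feature): tail the local replica set's change stream and replay the events into Atlas. The collection name orders and both connection strings are placeholders, and resume tokens, retries, and the initial full copy are omitted for brevity:

    # Run against the local replica set (change streams require a replica set).
    mongosh "mongodb://localhost:27017/appdb?replicaSet=rs0" --eval '
      const atlas = connect("mongodb+srv://user:REPLACE_ME@cluster0.example.mongodb.net/appdb");
      const cursor = db.orders.watch([], { fullDocument: "updateLookup" });
      while (!cursor.isClosed()) {
        const ev = cursor.tryNext();
        if (ev === null) { sleep(500); continue; }   // nothing new yet
        if (ev.operationType === "delete") {
          atlas.orders.deleteOne({ _id: ev.documentKey._id });
        } else if (ev.fullDocument) {                // insert / update / replace
          atlas.orders.replaceOne({ _id: ev.documentKey._id },
                                  ev.fullDocument, { upsert: true });
        }
      }
    '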
Can we replicate data from one RDS server to another? Or can we set up a master-slave relationship between two RDS servers?
Should we replicate data from a non-RDS instance to an RDS instance?
RDS can replicate from an external MySQL and can also be the master of an external slave. Whether you "should" do it depends on your use case.
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Importing.External.Repl.html
While I guess you could set up replication between two RDS instances yourself, I don't see why you should, since starting an RDS read replica is just a few clicks in the AWS console or an API call.
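For reference, the managed route is a single call (identifiers are placeholders):

    aws rds create-db-instance-read-replica \
      --db-instance-identifier mydb-replica \
      --source-db-instance-identifier mydb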
It is possible to replicate data from RDS to RDS. It is also possible to replicate data from RDS to some other MySQL server.
Steps:
Create your EC2 server and install MySQL.
Change the configuration to replicate data (a sketch follows these steps).
That will require additional work to manage the EC2 instance if your data keeps growing and crosses the server's limits.
Then you have to do all the manual work again to replicate the data, as we can't increase storage on an EC2 server.
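A sketch of the replication step above, run on the EC2 MySQL server so it follows the RDS master (endpoint, user, and binlog coordinates are placeholders; take the coordinates from SHOW MASTER STATUS on the source):

    mysql -u root -p -e "
      CHANGE MASTER TO
        MASTER_HOST='mydb.abc123.us-east-1.rds.amazonaws.com',
        MASTER_USER='repl_user',
        MASTER_PASSWORD='REPLACE_ME',
        MASTER_LOG_FILE='mysql-bin-changelog.000001',
        MASTER_LOG_POS=4;
      START SLAVE;"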
RDS provides an easy mechanism to create a read replica via a few clicks. (Note: a read replica is the costlier option.)
But going with that, you save the manual work of managing the database and doing these setups regularly, i.e., one person's salary.
If you are using a PostgreSQL database on RDS then you can use Bucardo for asynchronous replication. You need to create an EC2 instance for it (you can use a local system too, but it will not be fast enough).
Use the following tutorial if you want to use Bucardo:
https://www.installvirtual.com/how-to-install-bucardo-for-postgres-replication/
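Very roughly, a Bucardo setup on that EC2 box looks like this (names and endpoints are placeholders; the tutorial above covers the actual install):

    bucardo install                                    # one-time: creates Bucardo's control database
    bucardo add db src dbname=appdb host=rds-source-endpoint user=postgres
    bucardo add db dst dbname=appdb host=replica-endpoint user=postgres
    bucardo add all tables db=src relgroup=app_tables
    bucardo add sync app_sync relgroup=app_tables dbs=src,dst
    bucardo start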
I think you can use a snapshot to clone another RDS database.
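Something like this (identifiers are placeholders); note that this gives you a point-in-time clone, not ongoing replication:

    aws rds create-db-snapshot \
      --db-instance-identifier mydb \
      --db-snapshot-identifier mydb-snap

    aws rds restore-db-instance-from-db-snapshot \
      --db-instance-identifier mydb-clone \
      --db-snapshot-identifier mydb-snap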
I need a proper failover mechanism for MongoDB on AWS EC2. I know failover can be accomplished with replica sets, but what is the best way to fire up a new Ubuntu EC2 AMI node with Mongo installed, add it to the replica set again automatically (with zero manual operation), and return the replica set to its proper state?
EBS has some problems, but if I use local instance storage, I will lose a dead node's data. Does the replica have all of the master's data, so that the replica alone is enough to recover everything (on Mongo 1.8 with journaling), or do I have to use only EBS?
How should I start the Mongo instances? If I should start with the repair option, how can I separate a node's first run from a failover restart?
Regards,
The easiest way to bring up new nodes is to bring up a new node with a recent backup.
So now it's a question of how you do your backup and how you restore from the backup quickly.
The MongoDB site has a write-up for backups (in general) and backups on EC2 specifically. There's also a write-up for adding a new set member.
You can do this with instance storage or EBS drives, but you'll need different strategies for each. There's really no single way to do this, so I would check out the docs I've linked to for a primer.
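To make this concrete, a hypothetical recovery script for a replacement node might look like the following (bucket, paths, hostnames, and the set name are all placeholders; with instance storage you must restore from a backup like this, while with EBS you could instead re-attach the surviving volume):

    # Fetch and unpack the most recent backup into the data directory.
    aws s3 cp s3://my-backups/mongo/latest.tar.gz /tmp/latest.tar.gz
    tar -xzf /tmp/latest.tar.gz -C /var/lib/mongodb

    # Start mongod as a member of the existing replica set.
    mongod --replSet rs0 --dbpath /var/lib/mongodb --fork --logpath /var/log/mongod.log

    # Re-add the replacement node from the current primary.
    mongosh --host current-primary.example.com:27017 --eval 'rs.add("new-node.example.com:27017")'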
Highly recommend reading Sean Coates' article on multi-node MongoDB elections, failover and AWS - specifically, the subtlety of distributed arbiter nodes (e.g., make sure to give yourself a voting majority when an AZ goes down). A similar recommendation can be found in a comment on this (now-closed) MongoDB vs. Cassandra thread.