MongoDB Master and Slave at the same time

I have a server that I want to use for testing new app versions (say, a staging server), but at the same time I want to use it as a replication slave for MongoDB. So, there are two roles:
always replicate one database to this server (only one database, the original, with real data)
after deployment, make a copy of the original db into a new one (a *-staging db), and test my deployment against that database
I see from the docs how to replicate only a specified database from one server to another, and that seems to work fine. The problem is that when I tried to make a copy of the existing database on the slave server, it failed with the error "not master". I don't want to make this database copy on the master server, because that would mean all staging tests run against the master server, which doesn't work for me.
Does this mean that I can't have a MongoDB master for one database and a slave for another?

Slaves are read-only by default, but you can achieve what you are trying to do by making the server a master and a slave at the same time, passing both --master and --slave when starting it:
mongod --slave --source master:1234 --master
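Once the node is running as both slave and master it accepts local writes, so the staging copy can be made on it directly. A minimal sketch, assuming the legacy mongo shell and a hypothetical database name "mydb":
# on the staging server itself; copyDatabase runs against the local node,
# which now accepts writes because it is also a master
mongo --eval 'db.copyDatabase("mydb", "mydb-staging")'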

Related

PostgreSQL Logical Decoding Setup

This may be a very basic question, but I'm struggling with it. I'm attempting to set up logical decoding between 2 separate servers.
On the Master, I've gotten it set up so that the changes that I make to a table (INSERT, UPDATE, DELETE) are sent to my logical replication slot, and I can see the changes using the pg_logical_slot_get/peek_changes functions - all on my Master server.
On the Slave, I'm attempting to run the pg_recvlogical command (using the command prompt); however, I can't seem to get it to receive the changes made on the Master. I've realized that nowhere have I told the two to communicate with each other. I tried to define the host as the Master, to tell the replication process that it has to pull the changes from the Master to the Slave.
I've made all of the required changes to the postgresql.conf (wal_level=logical; max_replication_slots=3; max_wal_senders=3) and pg_hba.conf (whatever it told me to do to fix errors) files on both Master and Slave. The pg_recvlogical command I'm running is below. Could someone please help me get my Master and Slave communicating?
pg_recvlogical --start --slot=wendy_test --plugin=test_decoding --dbname=testdb --file=C:\Logical_Decoding_Test.log --username=dbaadmin --host=127.##.##.### --port=5432
When I execute this command on the Slave, I don't receive any error messages, but it doesn't return to a command prompt either. While it's doing its thing, I can check the database and see that the wendy_test slot is active, so it appears to be doing something. However, when I INSERT a row into the table on my Master, nothing happens on my Slave (even after 5 minutes).
I can add all of the code that I have used to create the replication slots, populate the queue, etc.; I was trying to limit the amount of reading and figured those to be more basic. Also, my end game is to transition this to have our AWS RDS system as our Slave, so any recommendations for that are appreciated as well. Thank you for your time.
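For reference, a minimal sketch of the configuration the question describes, using the database and user names from the command above; the pg_hba.conf entry and the Slave address are assumptions, not taken from the question:
# postgresql.conf on the Master (the settings quoted above)
wal_level = logical
max_replication_slots = 3
max_wal_senders = 3
# pg_hba.conf on the Master - logical replication connections are matched
# by database name, not by the "replication" keyword; replace 192.0.2.10
# with the Slave's actual address
host  testdb  dbaadmin  192.0.2.10/32  md5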

Primary and standby server at different timelines in postgres

I am very new to Postgres, and being new I got stuck at a point and need some help; please pardon me if you find it silly.
I am setting up pgpool HA, and at the Postgres level I have streaming replication between 3 nodes of postgresql-9.5 - 1 master and 2 slaves.
I was trying to configure auto failover, but when I switched back to my original master and restarted the postgres service, I got the following errors:
slave 1: highest timeline 1 of the primary is behind recovery timeline 11
slave 2: highest timeline 1 of the primary is behind recovery timeline 10
slave 3: highest timeline 1 of the primary is behind recovery timeline 3
I tried deleting the pg_xlog files on the slaves and copying all the files from the master's pg_xlog into the slaves, and then did an rsync.
I also did a pg_rewind, but it says:
target server needs to use either data checksums or wal_log_hints = on
(I have wal_log_hints = on set in postgresql.conf already)
I've tried doing a pg_basebackup, but since the database server on the slaves is still starting up, it's not able to connect to the server.
Is there any way to bring the master and the slaves onto the same timeline?
In my case, it happened because (experimentally) I updated the standby database tables, and when I then simulated master-standby streaming replication again I got the same errors.
So once again I cleaned the whole standby database directory and migrated the master database using a command like:
pg_basebackup -P -R -X stream -c fast -h 10.10.40.105 -U postgres -D standby/
I think something is wrong in your pgpool configuration. What tool have you been using for management of replication and master-slave control? Is it postmaster or repmgr?
I was trying to configure pgpool with 3 data nodes using the tutorial at http://jensd.be/591/linux/setup-a-redundant-postgresql-database-with-repmgr-and-pgpool and have done it correctly.
Also, you can learn about auto failover here.
(This question is obviously a duplicate of this one, so I'll repeat the answer here as well.)
I'm not sure what exactly you mean by "when I switched back to my original master", but it looks like you are doing the worst possible thing in PostgreSQL streaming replication - introducing a second master.
The most important thing you should know about PostgreSQL replication is that once a failover is performed, you cannot simply "switch back to the original master" - there is now a new master in the cluster, and the existence of two masters will cause damage.
After a slave is promoted to master, the only way to re-join the old master is to:
Destroy it (delete the data directory);
Join it as a slave (a command-line sketch follows the steps below).
If you want it to be the master again, you'll continue with the following:
Let it run for a while as a slave so that it can sync the data;
Kill the temporary master and fail over to the old master;
Rejoin the temporary master as a slave.
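A minimal command-line sketch of the destroy-and-rejoin steps, assuming PostgreSQL 9.5, a hypothetical data directory of /var/lib/postgresql/9.5/main, and the new master at 10.10.40.105 as in the answer above:
# on the old master: stop it and destroy its data directory
pg_ctl stop -D /var/lib/postgresql/9.5/main
rm -rf /var/lib/postgresql/9.5/main
# clone the current master; -R writes a recovery.conf pointing at it,
# so the node comes back up as a slave
pg_basebackup -P -R -X stream -c fast -h 10.10.40.105 -U postgres -D /var/lib/postgresql/9.5/main
pg_ctl start -D /var/lib/postgresql/9.5/main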
You cannot simply switch master servers! A master can be created ONLY by failover (promoting a slave).
You should also know that whenever you perform a failover (whenever the master is changed), all slaves (except the one that is promoted) need to be reconfigured to target the new master.
I suggest reading this tutorial - it'll help.

pgpool Setup on Database Server

I have three servers. One is running pgpool; the other two are in master-slave streaming replication. When installing pgpool, it was suggested that I install pgpool_regclass on my database servers as well. There was no problem installing it on the master node, but when I tried to do the same on the slave, I got the error ERROR: cannot execute CREATE EXTENSION in a read-only transaction.
I think it's because the slave is a hot standby, and SELECT pg_is_in_recovery(); returns true. So I wonder whether I am supposed to install pgpool_regclass on the slave or not. It seems not, but the pgpool docs say I should install it on every database pgpool is going to access.
I found the cause. Delete the recovery.conf file in the slave's data directory, and then run pgpool_regclass. Otherwise, the slave is in recovery mode and cannot execute write commands.
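A sketch of that workaround, assuming a hypothetical data directory and database name; note that once a standby leaves recovery it starts a new timeline, so it may need to be re-cloned from the master before it can stream again:
pg_ctl stop -D /var/lib/postgresql/9.5/main
# set recovery.conf aside (the answer says delete) so it can be restored later
mv /var/lib/postgresql/9.5/main/recovery.conf /tmp/
pg_ctl start -D /var/lib/postgresql/9.5/main   # node is now out of recovery
psql -d mydb -c 'CREATE EXTENSION pgpool_regclass;'   # now writable, so this succeeds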

Is it possible to have a Heroku Postgres DB replicate down to a slave DB on my laptop?

I'd like to have my master Postgres DB, which is hosted on Heroku, replicate down to a slave DB on my laptop. Is this possible?
Heroku's documentation talks about both master and slave hosted within Heroku:
https://devcenter.heroku.com/articles/heroku-postgres-follower-databases
Someone else asked whether it's possible to have the master outside Heroku and a slave inside Heroku (it's not):
Follow external database from Heroku
I haven't seen an answer for the reverse -- having the master in Heroku and the slave outside.
Why do I want this? To speed up development. With my app running locally and the DB in the cloud, the round-trip is long so data access is slow. Most data access is read-only. If I could have a local slave, it would speed things up significantly.
Related: what if my laptop is disconnected for a while? Would that cause problems for the master?
You cannot make a follower (slave) outside of the Heroku network – creating a follower requires superuser access, which Heroku Postgres doesn't give you, so you are limited to running followers on Heroku.
If you want to pull down a copy locally for use/inspection, you can do so with pgbackups: https://devcenter.heroku.com/articles/heroku-postgres-import-export
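A minimal sketch of that pull-a-copy workflow, based on the legacy pgbackups commands documented at the link above; the app and local database names are hypothetical:
# capture a fresh backup on Heroku, download it, and restore it locally
heroku pgbackups:capture --app my-app
curl -o latest.dump "$(heroku pgbackups:url --app my-app)"
pg_restore --verbose --clean --no-acl --no-owner -d my_local_db latest.dump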
I'd highly recommend the program Parity for this.
It copies down the last Heroku backup to your local machine with a nice command line interface:
development restore production
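If memory serves, Parity ships as a Ruby gem, so installation should be a one-liner (check the project's README):
gem install parity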
I'd rather just pull the production database's contents from Heroku every now and then.
$ heroku db:pull
You can speed that up with a rake task.
# lib/tasks/deployment.rake
namespace :production do
  desc 'Pull the production DB to the local env'
  task :pull_db do
    puts 'Pulling PRODUCTION db to local...'
    system 'heroku db:pull --remote MY_REMOTE_NAME --confirm MY_APP_NAME'
    puts 'Pulled production db to local'
  end
end
You can call rake production:pull_db to overwrite your local development database.

Incremental backups from server to local machine

My live site is using MongoDB to store user activity on the site.
I have a single server running MongoDB. I can't afford a second server for master-slave replication.
My problem is that I want to take a dump of the server's MongoDB database every day and restore it to my local machine so that I can query it locally. I know how to dump and restore, but the issue is that every day I have to dump the entire database from the server and restore it from scratch on my local machine, and that takes a lot of time.
So my question is: is there any way to do incremental backups in MongoDB, so that I only have to dump and restore a single day's data? That would take much less time.
I do not know much about MongoDB, but I have an idea.
I think you can introduce your local MongoDB instance as a slave of the master production db and, if possible, prevent the live system from sending reads to your local copy.
This can work because slaves keep track of the master's writes and deletes and try to make themselves a copy of the master.
And there is a good reason to do it this way: a slave doesn't have to be online all the time. When it comes back online, the slave will check the master's operation list (how much of this list is kept - e.g. one hour or one day of operations - is configurable on the master) and copy data from the master as quickly as possible.
Once you dump the master to your local machine, you can then keep your data backed up twice a day with this method, I think.
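A minimal sketch of that setup using the legacy master/slave options; host names and paths are hypothetical. --only limits replication to a single database, and --oplogSize (in MB) on the master controls how much operation history a disconnected slave can catch up on:
# on the production server: run as a master with ~1 GB of operation history
mongod --master --oplogSize 1024 --dbpath /data/db
# on the local machine: replicate just the one database
mongod --slave --source production.example.com:27017 --only mysite --dbpath /data/local-replica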