Simultaneous mongo instances no longer possible?

I have a production and a development server. The production server is running a mongod and my development server is running 2 instances:
1 - as "slave" for production (using a replicaSet). This duplicates my data and allows for easier backups.
2 - a "master". My development collections (couldn't use the slave instance, regarding slaveOkay etc).
They both have their own pid file, data folder, everything. This has been working without much issues for well over a year.
Unfortunately since the last version it seems that whenever I start the one instance, it terminates the other one (prod slave <> dev master). No matter which one gets started first, the other one is always stopped.
Does anyone have any idea why mongo suddenly behaves like this, and how to solve it?
Using the master instance to house the development collections is not really an option for me for various reasons.
Hope this makes things a bit more clear:
production writes --> production [master] --[replicaSet]--> development instance 1 [slave]
development writes --> development instance 2 [master]
Thanks!

Answering my own question for posterity:
It seems the init script for mongo version 2.4.6 has a peculiarity in it. It determines the PID file with the following expression:
PIDFILE=`awk -F= '/^dbpath\s=\s/{print $2}' "$CONFIGFILE"`
This looks for the dbpath with a whitespace before and after the equal sign.
Since the default config file contains "dbpath=/var/lib/mongo", the result of this expression is empty.
I imagine the init script - without any specific .pid file being given - just uses the default .pid file location. Normally (with one instance) this has no consequences. In my case it causes the other mongo instance to be terminated.
I've always used the "pidfilepath" config directive up to now (I don't recall whether it was already there when I first installed mongo). So I've updated my init scripts to the following (note the removed "\s" tokens):
PIDFILE=`awk -F= '/^pidfilepath=/{print $2}' "$CONFIGFILE"`
Then I made sure the config statements didn't have spaces:
pidfilepath=/var/run/mongo/mongod-name.pid
This solves my problem. I hope it does the same for anyone else in the same situation.
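For anyone who wants to verify the difference, here is a quick sanity check of both awk expressions against sample config lines (this assumes an awk that understands \s, such as gawk; the paths are just the examples from above):
echo 'dbpath=/var/lib/mongo' | awk -F= '/^dbpath\s=\s/{print $2}'
# prints nothing, so PIDFILE ends up empty
echo 'pidfilepath=/var/run/mongo/mongod-name.pid' | awk -F= '/^pidfilepath=/{print $2}'
# prints /var/run/mongo/mongod-name.pid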
Why use the dbpath for the pid file anyway?

Related

pg_create_logical_replication_slot hanging indefinitely due to old walsender process

I am testing logical replication between two PostgreSQL 11 databases for use in production (I was able to set it up thanks to this answer - PostgreSQL logical replication - create subscription hangs) and it worked well.
Now I am testing the scripts and procedure that would set it up automatically on the production databases, but I am facing a strange problem with logical replication slots.
I had to restart the logical replica due to a settings change that required a restart - which of course could also happen on replicas in the future. But the logical replication slot on the master did not disconnect; it is still marked active for a certain PID.
I dropped the subscription on the master (I am still only testing) and tried to repeat the whole process with a new logical replication slot, but I am facing a strange situation.
I cannot create a new logical replication slot with the new name. The process holding the old logical replication slot is still active and shows wait_event_type=Lock and wait_event=transaction.
When I try to use pg_create_logical_replication_slot to create a new logical replication slot, I get a similar situation. The new slot is created - I can see it in pg_catalog - but it is marked as active for the PID of the session which issued the command, and the command hangs indefinitely. When I check the processes, I can see this command active with the same wait values Lock/transaction.
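For reference, this is roughly how the slot and walsender state can be inspected (a sketch only; the columns assume PostgreSQL 10+ and a superuser connection):
psql -c "SELECT slot_name, active, active_pid FROM pg_replication_slots;"
psql -c "SELECT pid, state, wait_event_type, wait_event FROM pg_stat_activity WHERE backend_type = 'walsender';"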
I tried activating the "lock_timeout" parameter in postgresql.conf and reloading the configuration, but it did not help.
Killing that old hanging process would most likely bring down the whole postgres instance, because it is a "walsender" process. It is still visible in the process list with the IP of the replica and status "idle waiting".
I tried to find some parameter(s) that could force postgres to stop this walsender, but setting wal_keep_segments or wal_sender_timeout did not change anything. I even tried stopping the replica for a longer time - no effect.
Is there some way to handle this situation without restarting the whole postgres instance? Like forcing a timeout for the walsender or the transaction lock, etc.
Because if something like this happens in production I would not be able to use a restart or any other "brute force" approach. Thanks...
UPDATE:
The "walsender" process "died out" after some time, but the log does not show anything about it, so I do not know exactly when it happened. I can only guess it depends on the tcp_keepalives_* parameters. The default on Debian 9 is to keep an idle connection for 2 hours, so I set these parameters in postgresql.conf and will see how the following tests go.
Strangely enough, today everything works without any problems, and no matter how I try, I cannot reproduce yesterday's problems. Maybe some network communication problems in the cloud datacenter were involved - we experienced occasional timeouts in connections to other databases too.
So I really do not know the answer except for "wait until the walsender process on the master dies" - which can most likely be influenced by the tcp_keepalives_* settings. Therefore I recommend setting them to some reasonable values in postgresql.conf, because the OS defaults are usually far too long.
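For example, in postgresql.conf something along these lines (the exact values are only illustrative and should be tuned to your environment):
tcp_keepalives_idle = 60
tcp_keepalives_interval = 10
tcp_keepalives_count = 6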
Actually, we use these settings on our big analytical databases (set both in PostgreSQL and on the OS) because of similar problems: Golang and nodejs programs calculating statistics occasionally failed to recognize that a database session had ended or died, and hung until the OS closed the connection after 2 hours (the Debian default). All of this always seemed to be connected with network communication problems. With proper tcp_keepalives_* settings, the reaction in case of problems is much quicker.
After the old walsender process dies on the master, you can repeat all the steps and it should work. So it looks like I just had bad luck yesterday...

How to use sidekiq on Bluemix

I'm trying to use sidekiq on Bluemix. I think that I'm on the right track, but it's not working completely.
I have an app with Sinatra that uses sidekiq jobs to make many actions. I set the following line in my manifest.yml file:
command: bundle exec rackup config.ru -p $PORT && bundle exec sidekiq -r ./server.rb -c 3
I thought that with this command sidekiq would run. However, when I call the endpoint that creates a job, the job just stays in the "Queue" section of the Sidekiq panel.
What actions do I need to take to get sidekiq to process the job?
PS: I'm a beginner on Bluemix. I'm trying to migrate my app from Heroku to Bluemix.
Straightforward answer to this question "as asked":
Your start-up command never evaluates the second part, the one after '&&'. If you try that in your local environment, the result will be the same: the server starts up and the console simply tails the server logs. The rackup command never exits (and so never evaluates to true) until you send it a kill signal, which means the part after '&&' never gets to run at the same time.
Substituting a single '&' sort-of-kinda fixes that, since both commands will then run at the same time.
command: bundle exec rackup config.ru -p $PORT & bundle exec sidekiq
What is not ideal about that solution? Probably a lot of things. The biggest offender, though: having two processes active at the same time, while only one of them (the second one) is expected and observed by the platform.
Sending '(bluemix) cf stop' to the application instance created by this manifest stops only the observed process before decommissioning the instance - so we cannot be sure that the first process freed up external resources, properly sent notifications, closed its connections, or the like.
What you probably could consider instead:
1. Separate instances.
Bluemix is a CF implementation, and with a quick manifest.yml deploy, there is nothing preventing you from having the app server and sidekiq workers run on separate instances.
2. A better shell invocation (see the wrapper sketch after this list).
command: sh -c 'command1 & command2 & wait'
3. TBD - probably a lot of other options, but I am a beginner as well.
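If you do keep both processes in one app instance for now, a small wrapper script is a bit safer than a bare '&', because it can at least forward the stop signal to both children (a sketch only; the two commands are taken from the question above):
#!/bin/sh
bundle exec rackup config.ru -p "$PORT" &
WEB_PID=$!
bundle exec sidekiq -r ./server.rb -c 3 &
WORKER_PID=$!
# forward TERM/INT to both children so a platform stop can shut them down, then wait
trap 'kill -TERM "$WEB_PID" "$WORKER_PID"' TERM INT
wait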
Separate app instances on CloudFoundry for your rack-based application and your workers would be preferable because you can then:
Scale web / workers independently (more traffic? Just scale the web application)
Deploy each component independently, if needed
Make sure each process is health-checked
The downside of using & to join commands, as suggested in the other answer, is that the first process will launch in the background. This means you won't have reliable monitoring and automatic restarts if the first process crashes.
There's a slightly out-of-date example on the CloudFoundry website which demonstrates using two application manifests (one for web, one for workers) to deploy each part independently.
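In practice that boils down to pushing the same code twice with different manifests, roughly like this (the app and file names here are made up):
cf push myapp-web -f manifest-web.yml
cf push myapp-worker -f manifest-worker.yml
where the worker manifest carries its own command (bundle exec sidekiq ...) and typically sets no-route: true, since it doesn't serve HTTP.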

How does chef-solo --daemonize work, and what's the point?

I understand the purpose of chef-client --daemonize, because it's a service that Chef Server can connect to and interact with.
But chef-solo is a command that simply brings the current system in line with its specifications and is then done.
So what is the point of chef-solo --daemonize, and what specifically does it do? For example, does it autodetect when the system falls out of line with spec? Does it do so via polling or tapping into filesystem events? How does it behave if you update the cookbooks and node files it depends on when it's already running?
You might also ask why chef-solo supports the --splay and --interval arguments.
Don't forget that chef-server is not the only source of data.
Configuration values can rely on a bunch of other sources (APIs, OHAI, DNS...).
The most classic one is OHAI - think of a cookbook that configures memcached. You would probably want to reserve a certain amount of RAM for the operating system and give the rest to memcached.
Available RAM can be changed when running inside a VM, even without rebooting it.
That might be a good reason to run chef-solo as a daemon with frequent chef-runs, like you're used to when using chef-client with a chef-server.
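In practice that looks roughly like this (the interval/splay values and paths are just examples):
chef-solo --daemonize --interval 1800 --splay 300 -c /etc/chef/solo.rb -j /etc/chef/node.json
i.e. a converge every 30 minutes with up to 5 minutes of random splay.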
As for your other questions:
Q: Does it autodetect when the system falls out of line with spec?
Does it do so via polling or tapping into filesystem events?
A: Chef doesn't respond to changes. Instead, it runs frequently and makes sure the current state is in sync with the desired state - which can be based on chef-server inventory, API calls, OHAI attributes, etc. The desired state is constructed from scratch every time you run Chef, at the compile stage when all the resources are generated. Read about it here
Q: How does it behave if you update the cookbooks and node files it depends on when it's already running?
A: Usually when running chef-solo, one uses the --json flag to specify a JSON file with node attributes and a run-list. When running chef-solo in --daemonize mode, the node attributes are read only for the first run. For subsequent runs, it's as if you were running it without the --json flag. I couldn't figure out a way to make it behave as if --json were passed on every run; however, you can use the --override-runlist option to at least make the run-list stick.
Note that the attributes you're specifying in your JSON won't make it past the first run. This is possibly a bug.
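To illustrate the --override-runlist workaround, something like this (the recipe name is just a placeholder) forces a particular run list on an already-initialised node:
chef-solo -c /etc/chef/solo.rb -o 'recipe[memcached]'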

Why is the config server string order sensitive when launching mongos?

Thanks in advance for your time.
For a given sharded setup, mongos is launched while specifying the config server(s) to talk to. Say we start with the following mongos option:
--configdb=cf1,cf2,cf3
Everything is all fine and dandy. If you were to relaunch mongos (or launch a different mongos) with:
--configdb=cf3,cf2,cf1
It results in the following error:
Tue Jul 9 23:32:41 uncaught exception: error: { "$err" : "could not initialize sharding on connection rs1/db1.me.net:27017,db2.me.net:27017,db3.me.net:27017, :: caused by :: mongos specified a different config database string : stored :cfg1:27017,cfg2:27017,cfg3:27017 vs given :cfg3:27017,cfg2:27017,cfg1:27017","code" : 15907}
My question is: what is the reason mongos is sensitive to the order of the config server string? I would imagine that at some point it parses the individual server hostnames/ports, so why not just compare them as a set? I can see from the source code that it is just a string comparison, but my question is about the underlying reason for this.
Some context to this problem: I am using chef for my mongo deployments. We recently went through the exercise of migrating a config server while keeping the same hostname. However, this still ended up being a disruptive process, because the order in which chef picked up the config servers had changed, thus changing the order mongos starts with. I understand that this issue is directly caused by chef's behaviour, but I am curious as to why Mongo is not more flexible here.
Thanks again for your time.
When a mongos process changes metadata for the sharded cluster, it has to change it on all three config servers "simultaneously" (i.e. all three must agree in order for the metadata change to be valid).
If the system were to go down in the middle of such a metadata change and the config database order were not fixed, there would be many more possible permutations of incorrect states to unwind. Requiring a fixed sequence of config DBs allows (a) simpler checking of whether all members of the cluster are viewing the same metadata and (b) a significant reduction of the possible states when a system crashes or otherwise stops unexpectedly.
In addition, it reduces the chance of "race condition" bugs that could arise if different mongos processes initiated the same operations against different config servers. Take even as simple a case as a mongos process acquiring a "virtual" distributed lock to see if a change is necessary - how could you handle different mongos processes checking the config servers in different orders to check on (and take out) the lock?
In summary, the three config servers are not a replica set, but one of them still has to be the one that always accepts changes "first" - think of the order of the configdb string given to mongos as the designation of that "first" status.
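On the deployment side, a practical workaround is to render the configdb string from one stable (e.g. sorted) source so that every mongos always receives exactly the same value - for example (the hostnames are taken from the error message above, the rest is illustrative):
CONFIGDB="cfg1:27017,cfg2:27017,cfg3:27017"
mongos --configdb "$CONFIGDB"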

How to scale MongoDB?

I know that MongoDB can scale vertically. What about if I am running out of disk?
I am currently using EC2 with EBS. As you know, an EBS volume has to be assigned a fixed size.
What if MongoDB grows bigger than the EBS size? Do I have to create a larger EBS volume and copy the files over?
Or should I start more MongoDB instances, each connected to a different EBS disk? In that case, I could connect to a different instance for different databases.
If you're running out of disk, you obviously need to get a bigger disk.
There are several ways to migrate your data, it really depends on the type of up-time you need. First steps of course involve bundling the machine and creating the new volume.
These tips go from easiest to hardest.
Can you take the database completely off-line for several minutes?
If so, do this (migration by copy; a rough command sketch follows the list):
Mount new EBS on the server.
Stop your app from connecting to Mongo.
Shut down mongod and wait for everything to write (check the logs)
Copy all of the data files (and probably the logs) to the new EBS volume.
While the copy is happening, update your mongod start script (or config file) to point to the new volume.
Start mongod and check connection
Restart your app.
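Roughly, those steps look like this (the paths and service names are assumptions - adjust them to your setup):
sudo service mongod stop
sudo rsync -a /var/lib/mongo/ /mnt/new-ebs/mongo/
# edit dbpath in the config file to point at the new location, then
sudo service mongod start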
Can you take the database off-line for just a few minutes?
If so, do this (slaving and switch; a rough command sketch follows these steps):
Start up a new instance and mount the new EBS on that server.
Install / start mongod as a --slave pointing at the current database. (you may need to re-start the current as --master)
The slave will do a fresh synchronization. Once the slave is up-to-date, you'll do a "switch" (next steps).
Turn off writes from the system.
Shut down the original mongod process.
Re-start the "new" mongod as a master instead of the slave.
Re-activate system writes pointing at the new master.
Done correctly those last three steps can happen in minutes or even seconds.
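The old-style master/slave flags the steps refer to look roughly like this (legacy options that predate replica sets; hosts and paths are placeholders):
mongod --master --dbpath /var/lib/mongo
mongod --slave --source current-host:27017 --dbpath /mnt/new-ebs/mongo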
Can you not afford any down-time?
If so, do this (master-master):
Start up a new instance and mount the new EBS on that server.
Install / start mongod as a master and a slave against the current database. (may need to re-start current as master, minimal down-time?)
The new computer should do a fresh synchronization.
Once the new computer is up-to-date, switch the system to point at the new server.
I know it seems like this last version is actually the best, but it can be a little dicey (as of this writing). The reason is simply that I've honestly had a lot of issues with "Master-Master" replication, especially if you don't start with both active.
If you plan on using this method, I highly suggest a smaller practice run first. If something bombs here, Mongo might simply wipe all of your data files which will have the effect of taking more stuff down.
If you get a good version of this please post the commands, I'd like to see it in action.
Doesn't the E in EBS stand for elastic meaning something like resizing on the fly?
Currently the MongoDB team is working on finishing sharding, which will allow you to scale horizontally by partitioning data across different servers. Give it a month or two and it will work fine. The developers are quite good at keeping their promises.
http://api.mongodb.org/wiki/current/Sharding%20Introduction.html
http://api.mongodb.org/wiki/current/Sharding%20Limits.html
You could slave the bigger disk off the smaller until it's caught up
or
fsync+lock and take a file system snapshot and copy it onto the bigger disk.
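The fsync+lock part looks roughly like this from the mongo shell (newer shells have db.fsyncLock()/db.fsyncUnlock() helpers; on older versions the equivalent is db.runCommand({fsync: 1, lock: 1})):
mongo --eval "db.fsyncLock()"
# take the filesystem / EBS snapshot of the data volume here
mongo --eval "db.fsyncUnlock()"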
Well, I am using MongoDB now. I am pretty amazed at the performance it delivers, especially on some simple sorting.
I believe it's a good tool for simple web application logic. The remaining concerns for me are how to scale and how to back up. I will continue to explore.
The only disadvantage I've found is that I don't have any good tools to inspect the data stored inside. For example, I want to put my logging from MySQL into Mongo as well, but it's pretty difficult for me to view the logs there. Previously, I could use a MySQL query to fetch what I wanted easily.
Anyway, it's a good tool and I will continue to use it.