MongoDB interrupted while reindexing

I have a collection with about 3,000,000 entries that I need to reindex. This whole thing began when I tried to add a 2d index. To do this, I created an SSH tunnel, opened the mongo shell and tried to use ensureIndex. I'm in a place with a somewhat unreliable internet connection, and about an hour in the pipe broke. I then tunneled back in, opened the mongo shell and looked at the indexes with getIndexes; the new index showed up, but I wasn't confident the build had finished, so I decided to run reIndex. In retrospect, this was stupid. The pipe broke again. Now when I open the shell and try to issue getIndexes, the shell doesn't respond.
So what should I do? Do I need to repair my database? Can I issue reIndex again once I have a more reliable internet connection? Is there a way to issue reIndex without keeping the shell open, but also without running it in the background and having it take eons? (I'll check the mongod command-line options to see if I can find anything, then check the Node.js mongo API so I can try running something as a service on the server.)
Also, if I end up running reIndex as a service on the server, is there any way to check whether it's working? The most frustrating part right now is that I have no idea if my database is OK, whether reIndex is still running, etc. Any help would be much appreciated. Thanks.

You don't have a problem. Once the server has accepted a command it keeps running it server-side, and it only stops if you explicitly kill the operation (db.killOp()).
You do not need to keep the shell connected for the index operation to finish!
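If you want to confirm that a build is actually still running (or find its opid in case you ever do want to kill it), you can ask the server for its in-progress operations from the mongo shell. A rough sketch - the exact "msg" text varies between MongoDB versions, so treat the filter as illustrative:
// list current operations and print anything that looks like an index build
db.currentOp(true).inprog.forEach(function (op) {
  if (op.msg && op.msg.indexOf("Index Build") !== -1) {
    printjson({ opid: op.opid, ns: op.ns, msg: op.msg });
  }
});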

Regarding the connection problems, try using the screen command.
It lets you create a "persistent" terminal session - persistent not in the sense of disk persistence, but in the sense that it survives a lost connection.
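A typical session looks roughly like this (screen has to be installed on the server, and the session name is just an example):
screen -S reindex   # start a named session on the server
mongo               # open the shell and kick off the index build inside it
# detach with Ctrl-a d; the session keeps running even if your SSH connection drops
screen -r reindex   # reattach later to check on progress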

Related

DataGrip: create a Postgres index without waiting for execution

Is there a way to submit a command to a database from DataGrip without keeping the connection open / asynchronously? I'm attempting to create indexes concurrently, but I'd also like to be able to close my laptop.
My DataGrip workflow:
Select a column in a database, click 'modify column', and eventually run code such as:
create index concurrently batchdisbursements_updated_index
on de_testing.batchdisbursements (updated);
However, these run as background tasks and are cancelled if I exit DataGrip.
What if you close your laptop without exiting DataGrip? DataGrip is probably actively sending a cancellation message to PostgreSQL when you exit it. If you just close the laptop, I doubt it will do that. In that case, PostgreSQL won't notice the client has gone away until it tries to send a message, at which point the index creation should already be done and committed.
But this is a fragile plan. I would ssh to the server, run screen (or one of the fancier variants), run psql in that, and create the indexes from there.
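Roughly like this, with placeholder host, session and database names, and the index statement taken from the question:
ssh user@dbserver      # placeholder host
screen -S indexes      # named session that survives a dropped connection
psql -d your_database  # placeholder database name
-- then, inside psql:
create index concurrently batchdisbursements_updated_index
on de_testing.batchdisbursements (updated);
-- detach from screen with Ctrl-a d and reattach later with: screen -r indexes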

Cannot stop Postgres despite immediate stop

I have this issue that is driving me nuts. Despite all my efforts, I am not able to force my Postgres server to shut down. I have followed these instructions: http://www.question-defense.com/2008/10/17/pg_ctl-server-does-not-shut-down-force-postgres-to-shutdown
but still, nothing happens and all I got in the shell is
waiting for server to shut down............................................................... failed
pg_ctl: server does not shut down
Any help much appreciated.
Update: Checking the logs, I have this recurring error :
LOG: checkpoints are occurring too frequently (25 seconds apart)
HINT: Consider increasing the configuration parameter "checkpoint_segments".
After giving it a lot of thought, especially about the way I installed it in the first place, I realized that I had set up the install so that a launch daemon would start Postgres when my machine boots. Thus, any manual kill simply resulted in the daemon recreating those processes.
To resolve this problem you need to stop the daemon with launchctl and remove the .plist file in your Postgres directory.
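On a Homebrew-style install the commands look roughly like this; the .plist path and data directory are guesses, so substitute whatever your installer actually created:
launchctl unload -w ~/Library/LaunchAgents/homebrew.mxcl.postgresql.plist   # stop the daemon from respawning postgres
pg_ctl -D /usr/local/var/postgres stop -m fast                              # now a normal stop actually sticks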
Good luck if you face the same problem.
You are probably running with the default setting of checkpoint_segments = 3, which produces those warnings. Your database does many writes, right? It takes some time to write all of this to disk, and your database is quite busy rotating the log files instead of doing real work.
If you increase checkpoint_segments, you will see performance improvements and less I/O.
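For example, something along these lines in postgresql.conf (illustrative values only, and only for PostgreSQL versions before 9.5, where checkpoint_segments still exists; newer versions use max_wal_size instead):
checkpoint_segments = 16             # default is 3; larger values space checkpoints further apart
checkpoint_completion_target = 0.9   # spread checkpoint I/O over more of the interval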
For further reading: https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server

Error in MongoDB: "getFile(): bad file number value (corrupt db?): run repair"

After my last Meteor upgrade my database became corrupted. First it started with this error message when I tried to create a new user (we're using meteor-accounts):
getFile(): bad file number value (corrupt db?): run repair
Then I saw in another question that I should run db.repairDatabase() but, although mongo shell said that the database was now ok, it didn't really work. The error message above was still showing up.
So I read something about corrupted indexes and dropped the indexes on the users collection, which obviously just made everything worse. Now I have two users with the same email address and Meteor doesn't start anymore:
MongoError: E11000 duplicate key error index: meteor.users.$emails.address_1 dup key: { : "thiago#gdeahj.com" }
When I try to remove one of these users, the original error shows up again:
meteor:PRIMARY> db.users.remove({ _id: "cAtu2XsEXTbqL2Wvx"})
getFile(): bad file number value (corrupt db?): run repair
Fortunately we're still on the development phase and we can just drop the whole database and start over, but this has made me really insecure about running Meteor on production environment. Is there any way to fix a database in this state?
You can run db.repairDatabase to try to repair the data files - but read the linked page first for details and warnings. Make sure you run with journaling on if you didn't have it on before and, at least for production, run a replica set. Normally, in this situation it'd be preferable to resync from another replica set member or restore a backup rather than repair. You can find more information about data recovery in this article from the MongoDB Manual.
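If you do attempt a repair, the two usual forms look like this (the dbpath shown is just MongoDB's default; substitute your own, take a backup first, and make sure there is enough free disk space):
# offline repair (stop the running mongod first); /data/db is just the default path
mongod --repair --dbpath /data/db
# or, against a running instance, from the mongo shell:
#   db.repairDatabase()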

Starting mongo then leaving shell without shutting down the server

Hi guys, I start Mongo remotely over SSH in PuTTY, and after running mongod it says "listening for ports", but I can't then leave the session without shutting down the server. How do I do that?
On Linux, look up the screen utility. There are other alternatives that do the same thing as well. It basically creates a saved session that you can reattach to later -- I use it extensively to run long tests/services that I can reattach to quickly.
The Starting and Stopping Mongo documentation article explains several options, one of which is using the --fork command line parameter to start mongod as a background process. Additionally, you can look into using service controls provided by your operating system (e.g. Windows services, init.d, upstart).
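For instance (the log and data paths are just examples; --fork requires a --logpath):
mongod --fork --logpath /var/log/mongod.log --dbpath /data/db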
You can make a quick Mongo service script in either Windows or Linux.
You can also press CTRL+Z, which suspends the process (follow it with the bg command to let it keep running in the background). That gives you your terminal back (or at least it does for me).
I also like "Screen". It is quite powerful and easy to navigate so I would personally give that a try.

Hard to think of a reason why MongoDB doesn't create /data/db for us automatically?

I installed MongoDB on both Windows 7 and Mac OS X, and in both places I got mongod (the server) and mongo (the client).
But in both places, running mongod fails if I double-click the file, and the error message disappears too quickly for me to see anything. (It was better on the Mac because Terminal didn't exit automatically and showed the error message.)
It turned out to be because /data/db did not exist, and the QuickStart guide says: by default MongoDB will store data in /data/db, but it won't automatically create that directory.
I just have a big question: MongoDB seems to want a lot of people using it (as do many other products), so why would it not automatically create the folder for you if it didn't exist? Creating it can't do much harm... especially as you can state so in the user agreement. The question is why. I can think of one strange reason, but the reason may be too strange to list here...
One good reason would be that you do not want it in /data/db. In this case, you want it to fail with an error when you forgot to specify the correct directory on the command line. The same goes for mis-spelled directory names. If MongoDB just created a new directory and started to serve from there, that would not be very helpful. It would be quite confusing, because databases and collections are auto-created, so there would not even be errors when you try to access them.
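In practice the fix is simply to create the directory yourself before starting the server (the paths below are the documented defaults; pick your own and pass it with --dbpath if you prefer):
mkdir -p /data/db          # create the default data directory
mongod --dbpath /data/db   # --dbpath can be omitted here, since /data/db is the default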