root can't perform listCollections command on db - mongodb

I have credentials for the root user and I am using those credentials to automate a DB backup. The main aim is to create a prototype for automated DB backups and, for simplicity, I am using root. The script (borrowed from an article) looks as follows:
#!/bin/bash
#Force file synchronization and lock writes
mongo admin -u "root" -p "root" --eval "printjson(db.fsyncLock())"
MONGODUMP_PATH="/usr/bin/mongodump"
MONGO_DATABASE="mydb" #replace with your database name
TIMESTAMP=`date +%F-%H%M`
S3_BUCKET_NAME="mydb" #replace with your bucket name on Amazon S3
S3_BUCKET_PATH="backup/mongo"
# Create backup
$MONGODUMP_PATH -d $MONGO_DATABASE
# Add timestamp to backup
mv dump mongodb-$HOSTNAME-$TIMESTAMP
tar cf mongodb-$HOSTNAME-$TIMESTAMP.tar mongodb-$HOSTNAME-$TIMESTAMP
# Upload to S3
s3cmd put mongodb-$HOSTNAME-$TIMESTAMP.tar s3://$S3_BUCKET_NAME/$S3_BUCKET_PATH/mongodb-$HOSTNAME-$TIMESTAMP.tar
#Unlock database writes
mongo admin -u "root" -p "root" --eval "printjson(db.fsyncUnlock())"
#Delete local files
#rm -rf mongodb-*
I am getting the following error:
Failed: error getting collections for database mydb: error running
listCollections. Database: mydb Err: not authorized on mydb to
execute command { listCollections: 1, cursor: {} }
Doesn't root have access to all the databases? I am a bit scared that I might run into a situation where I expect to be able to do something as root but it doesn't have the permission. This is the real reason for posting the question: I want to avoid surprises like this in the future.

The error above was misleading to me, but my script was wrong, or rather incomplete.
If you look at the following line in my script:
$MONGODUMP_PATH -d $MONGO_DATABASE
I am not providing any user to the mongodump command, hence the error message. If I rewrite that line as follows:
$MONGODUMP_PATH -d $MONGO_DATABASE --authenticationDatabase "admin" -u "dbowner" -p "pwd"
then the error goes away.
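For the prototype in the question, the same root credentials should also work with mongodump itself, since in current MongoDB versions the built-in root role includes the backup role. A minimal sketch, reusing the placeholder credentials from the question:
$MONGODUMP_PATH -d $MONGO_DATABASE --authenticationDatabase "admin" -u "root" -p "root"
To avoid future surprises, you can also inspect which roles a user actually has from the mongo shell:
mongo admin -u "root" -p "root" --eval 'printjson(db.getUser("root"))'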

Related

execute sql files from a folder in postgres

I have a set of sql files placed in a folder in a postgres docker image. Whenever a new account is created, I want a new database to be created and all these scripts to be executed against that newly created database.
The new database will be created manually. I need to create a sh file that asks for the db name, host, port, password etc. and then executes all the sql files on the newly created database.
I tried, and I can execute one single file:
psql -h localhost -d sampledb -U superuser -f sample.sql
I need to execute all the files in the folder. How can I achieve this?
Here you can make use of a small bash script.
Copy all your sql files to a folder, for example test.
Then execute the loop below (globbing the files directly is safer than parsing ls output):
for i in test/*.sql; do
    psql -h localhost -d sampledb -U superuser -f "$i";
done
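Since the question asks for a script that prompts for the db name, host, port and password, here is a minimal sketch along the same lines (the variable names and the test/ folder are assumptions):
#!/bin/bash
# Prompt for connection details, then run every .sql file in the folder.
read -p "Database name: " DB_NAME
read -p "Host: " DB_HOST
read -p "Port: " DB_PORT
read -p "User: " DB_USER
read -s -p "Password: " PGPASSWORD; echo
export PGPASSWORD  # psql picks the password up from the environment
for f in test/*.sql; do
    psql -h "$DB_HOST" -p "$DB_PORT" -d "$DB_NAME" -U "$DB_USER" -f "$f"
done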

pgrouting error when trying to dump data into database

I was just following this tutorial HERE about pgRouting. When I run the following command:
psql -U user -d postgres -f ~/Desktop/pgrouting-workshop/data/sampledata_routing.sql
I get an error saying the following:
/var/lib/postgresql/Desktop/pgrouting-workshop/data/sampledata_routing.sql: No such file or directory
On my desktop I do have a folder pgrouting-workshop, which does contain the folder data and the sql dump file.
So why am I getting this error?
Because your Desktop isn't in the postgres user's home directory, which is /var/lib/postgresql; it's located at /home/myusername/Desktop.
Presumably the psql command you're running is under a sudo -u postgres -i shell, so ~/ means the postgres user's home directory.
Use ~myusername/Desktop/blahblah. Note that the postgres user may not have permission to access it; you can chmod go+x ~ ~/Desktop (run as your user, not postgres) to change that.
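Putting that together, a sketch of the fix (myusername is a placeholder for your actual login):
# run as your normal user, so the postgres user can traverse into Desktop
chmod go+x ~ ~/Desktop
# then, inside the sudo -u postgres -i shell:
psql -U user -d postgres -f ~myusername/Desktop/pgrouting-workshop/data/sampledata_routing.sql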

Restoring .dump file - "Permission Denied"

I am setting up a website and am having some trouble restoring a database .dump file. I am using CentOS 7, SELinux, PostgreSQL 9.4, and Apache 2.
This is my pg_hba.conf file.
This is the command I am using to restore the dump:
psql --single-transaction -U postgres db_name < dump_location
When I do this, I get the error:
Permission denied.
Am I missing something, or is there some way I should alter my settings? Let me know if you need more information.
Thank you!
The operating system user you are running your shell as does not have permission to read the path dump_location.
Note that this is not necessarily the operating system user you run psql as. In a statement like:
sudo -u postgres psql mydb < /some/path
then /some/path is read as the current user, before sudo, not as user postgres, because it's the shell that performs the input redirection, not psql.
If, in the above example, you wanted to read the file as user postgres you would:
sudo -u postgres psql -f /some/path mydb
That instructs psql to open and read /some/path when it's started.
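A quick way to check both sides of this (the path is a placeholder):
ls -l /some/path                                       # can your shell user read it?
sudo -u postgres test -r /some/path && echo readable   # can the postgres user read it?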
Just make sure that you are using the correct database user and that you have at least read permission on the dump file. Something like:
psql -d <dbname> -U postgres -f <dump_file>
will work.

Can't import to heroku postgres database from dump

Sorry if it is a duplicate, but I tried to find an answer here, and nothing helped.
So I've read heroku articles like this and this. I was able to save a dump file, which I created with the pg:backups capture command. I uploaded it to s3 and tried to restore it with:
heroku pg:backups restore DATABASE 'https://s3-eu-west-1.amazonaws.com/somebucket/uploads/tmp/b011.dump'
But it just does not work! In the console it logs:
Unknown database: https://s3-eu-west-1.amazonaws.com/somebucket/uploads/tmp/b011.dump. Valid options are: DATABASE_URL, HEROKU_POSTGRESQL_SILVER_URL
I tried the listed options instead of DATABASE, but with the same result. I've also tried other hosting, again with the same result. I also tried to restore it from another app, like this:
heroku pg:backups restore myapp::b001 HEROKU_POSTGRESQL_SILVER --app myapp-cedar
But it logs Backup oncampus::b001 not found. However, the command heroku pg:backups --app myapp shows that it is present.
=== Backups
ID    Backup Time                Status                              Size    Database
----  -------------------------  ----------------------------------  ------  --------
b001  2015-03-13 18:10:14 +0000  Finished 2015-03-13 18:10:22 +0000  9.71MB  ORANGE
Don't know what to do now. Just hope someone will help me.
The order of arguments to the command is significant. In the first example above, you have heroku pg:backups restore DATABASE 'https://s3-eu-west-1.amazonaws.com/somebucket/uploads/tmp/b011.dump', but the command expects the backup reference FIRST and the database to load into second, which would give heroku pg:backups restore 'https://s3-eu-west-1.amazonaws.com/somebucket/uploads/tmp/b011.dump' DATABASE instead. I think in the newer tooling the backup ID may be preferred to a URL, but a URL ought to work as long as it is publicly accessible. Hope that helps; otherwise let me know and we can try some other things.
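Concretely, the corrected invocation would look like this (the app name is a placeholder):
heroku pg:backups restore 'https://s3-eu-west-1.amazonaws.com/somebucket/uploads/tmp/b011.dump' DATABASE_URL --app myapp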
Prepare two sh files as below.
backup.sh
NOWDATE=`date +%Y-%m-%d`
BACKUPNAME=$NOWDATE.dump
export PGPASSWORD='<password>'
echo "Creating backup of database to $BACKUPNAME"
/usr/bin/pg_dump --host <urhostname> --port 5432 --username "<user>" --role "<role>" --no-password --format tar --blobs --verbose --file "./$BACKUPNAME" "dbname"
echo "Successfully created database backup"
echo "Uploading backup to Amazon S3 bucket..."
s3cmd put $BACKUPNAME s3://path/to/$BACKUPNAME
echo "Successfully uploaded backup to S3"
echo "Deleting backup file..."
rm $BACKUPNAME
echo "Done"
restore.sh
NOWDATE=`date +%Y-%m-%d`
BACKUPNAME=$NOWDATE.dump
echo "Downloading the file $BACKUPNAME"
s3cmd get s3://path/to/$BACKUPNAME
echo "Successfully downloaded"
echo "Will restore it, please wait"
export PGPASSWORD='<password>'
pg_restore --host <host> --port 5432 --username "<user>" --dbname "<databasename>" --role "<role>" --no-password --verbose "./$BACKUPNAME"
echo "Restoring the file $BACKUPNAME"
echo "Deleting backup file..."
rm -r $BACKUPNAME
echo "Done"
You need to configure s3cmd credentials first. Hope it helps!
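If you haven't set up s3cmd yet, its interactive configuration asks for your AWS access key and secret:
s3cmd --configure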
geemus correctly solved Danny Ocean's problem.
The order does matter, and the DATABASE_URL goes at the end, not at the beginning.
I just wanted to leave the link to this answer, which I believe could help other users with a similar problem.

Is there a simple way to export the data from a meteor deployed app?

Is there a simple way to export the data from a meteor deployed app?
So, for example, if I had deployed an app named test.meteor.com...
How could I easily download the data that has been collected by that app - so that I could run it locally with data from the deployed app?
To get the URL for your deployed site at meteor.com use the command (you may need to provide your site password if you password protected it):
meteor mongo --url YOURSITE.meteor.com
Which will return something like :
mongodb://client:PASSWORD@sky.member1.mongolayer.com:27017/YOURSITE_meteor_com
Which you can give to a program like mongodump
mongodump -u client -h sky.member1.mongolayer.com:27017 -d YOURSITE_meteor_com\
-p PASSWORD
The password is only good for one minute. For usage:
$ meteor --help mongo
And here's how to do the opposite (uploading your local mongo db to meteor):
https://gist.github.com/IslamMagdy/5519514
# How to upload local db to meteor:
# -h = host, -d = database name, -o = dump folder name
mongodump -h 127.0.0.1:3002 -d meteor -o meteor
# get meteor db url, username, and password
meteor mongo --url myapp.meteor.com
# -h = host, -d = database name (app domain), -p = password, folder = the path to the dumped db
mongorestore -u client -h c0.meteor.m0.mongolayer.com:27017 -d myapp_meteor_com -p 'password' folder/
Based on Kasper Souren's solution I created an updated script that works with current versions of Meteor and also works when you protect your remote Meteor app with a password.
Please create the following script parse-mongo-url.coffee:
spawn = require('child_process').spawn
mongo = spawn 'meteor', ['mongo', '--url', 'YOURPROJECT.meteor.com'], stdio: [process.stdin, 'pipe', process.stderr]
mongo.stdout.on 'data', (data) ->
  data = data.toString()
  m = data.match /mongodb:\/\/([^:]+):([^@]+)@([^:]+):27017\/([^\/]+)/
  if m?
    process.stdout.write "-u #{m[1]} -p #{m[2]} -h #{m[3]} -d #{m[4]}"
  else
    if data == 'Password: '
      process.stderr.write data
Then execute it like this in a *nix shell:
mongodump `coffee parse-mongo-url.coffee`
I have created a tool, mmongo, that wraps all the Mongo DB client shell commands for convenient use on a Meteor database. If you use npm (Node Package Manager), you can install it with:
npm install -g mmongo
Otherwise, see README.
To back up your Meteor database, you can now do:
mmongo test.meteor.com dump
Uploading it to your local development meteor would be:
mmongo restore dump/test_meteor_com
And if you accidentally delete your production database:
mmongo test.meteor.com --eval 'db.dropDatabase()' # whoops!
You can easily restore it:
mmongo test.meteor.com restore dump/test_meteor_com
If you'd rather export a collection (say tasks) to something readable:
mmongo test.meteor.com export -c tasks -o tasks.json
Then you can open up tasks.json in your text editor, do some changes and insert the changes with:
mmongo test.meteor.com import tasks.json -c tasks --upsert
Github, NPM
I suppose your data is in a mongodb database, so if that's the case, the question is more mongo-related than meteor-related. You may want to take a look at the mongoexport and mongoimport command line tools.
Edit (for example):
mongoexport -h flame.mongohq.com:12345 -u my_user -p my_pwd -d my_db -c my_coll
You need to install mongodb on your computer to get these command line tools, and obviously you need your mongodb connection information. In the above example, I connect to MongoHQ (flame.mongohq.com is the host, 12345 is the port of your mongo server), but I don't know which Mongo host is actually used by the meteor hosting. If you tried the Meteor examples (TODOs, Leaderboard, etc.) locally, chances are you have already installed Mongo, since it uses a local server by default.
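The example above only shows the export side; the import counterpart would look something like this (the file name is a placeholder):
mongoimport -h flame.mongohq.com:12345 -u my_user -p my_pwd -d my_db -c my_coll my_coll.json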
Here is another solution in bash
#! /bin/bash
# inspired by http://stackoverflow.com/questions/11353547/bash-string-extraction-manipulation
# http://www.davidpashley.com/articles/writing-robust-shell-scripts/
set -o nounset
set -o errexit
set -o pipefail
set -x
# stackoverflow.com/questions/7216358/date-command-on-os-x-doesnt-have-iso-8601-i-option
function nowString {
  date -u +"%Y-%m-%dT%H:%M:%SZ"
}
NOW=$(nowString)
# prod_url="mongodb://...:...#...:.../..."
prod_pattern="mongodb://([^:]+):([^@]+)@([^:]+):([^/]+)/(.*)"
prod_url=$(meteor mongo katapoolt --url | tr -d '\n')
[[ ${prod_url} =~ ${prod_pattern} ]]
PROD_USER="${BASH_REMATCH[1]}"
PROD_PASSWORD="${BASH_REMATCH[2]}"
PROD_HOST="${BASH_REMATCH[3]}"
PROD_PORT="${BASH_REMATCH[4]}"
PROD_DB="${BASH_REMATCH[5]}"
PROD_DUMP_DIR=dumps/${NOW}
mkdir -p dumps
# local_url="mongodb://...:.../..."
local_pattern="mongodb://([^:]+):([^/]+)/(.*)"
local_url=$(meteor mongo --url | tr -d '\n')
[[ ${local_url} =~ ${local_pattern} ]]
LOCAL_HOST="${BASH_REMATCH[1]}"
LOCAL_PORT="${BASH_REMATCH[2]}"
LOCAL_DB="${BASH_REMATCH[3]}"
mongodump --host ${PROD_HOST} --port ${PROD_PORT} --username ${PROD_USER} --password ${PROD_PASSWORD} --db ${PROD_DB} --out ${PROD_DUMP_DIR}
mongorestore --port ${LOCAL_PORT} --host ${LOCAL_HOST} --db ${LOCAL_DB} ${PROD_DUMP_DIR}/${PROD_DB}
meteor-backup is by far the easiest way to do this.
sudo npm install -g meteor-db-utils
meteor-backup [domain] [collection...]
As of March 2015 you still need to specify all the collections you want to fetch, though (until this issue is resolved).
Stuff from the past below
I'm doing
mongodump $(meteor mongo -U example.meteor.com | coffee url2args.cfee)
together with this little coffeescript, with a mangled extension in order not to confuse Meteor, url2args.cfee:
stdin = process.openStdin()
stdin.setEncoding 'utf8'
stdin.on 'data', (input) ->
  m = input.match /mongodb:\/\/(\w+):((\w+-)+\w+)@((\w+\.)+\w+):27017\/(\w+)/
  console.log "-u #{m[1]} -h #{m[4]} -p #{m[2]} -d #{m[6]}"
(it would be nicer if meteor mongo -U --mongodumpoptions would give these options, or if mongodump would accept the mongo:// URL)
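Note: newer versions of mongodump (around 3.4.6 and later) do accept a connection string directly via --uri, which addresses exactly this wish. The credentials and host below are placeholders:
mongodump --uri "mongodb://client:PASSWORD@sky.member1.mongolayer.com:27017/YOURSITE_meteor_com"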
# How to upload local db to meteor:
# -h = host, -d = database name, -o = dump folder name
mongodump -h 127.0.0.1:3001 -d meteor -o meteor
# get meteor db url, username, and password
meteor mongo --url myapp.meteor.com
# -h = host, -d = database name (app domain), -p = password, folder = the path to the dumped db
mongorestore -u client -h http://production-db-a2.meteor.io:27017 -d myapp_meteor_com -p 'password' folder/
While uploading the local db to the remote db, I get an assertion exception:
shubham#shubham-PC:$ mongorestore -u client -h http://production-db-a2.meteor.io:27017 -d myapp_meteor_com -p my_password local/
2015-04-22T16:37:38.504+0530 Assertion failure _setName.size() src/mongo/client/dbclientinterface.h 219
2015-04-22T16:37:38.506+0530 0xdcc299 0xd6c7c8 0xd4bfd2 0x663468 0x65d82e 0x605f98 0x606442 0x7f5d102f8ec5 0x60af41
mongorestore(_ZN5mongo15printStackTraceERSo+0x39) [0xdcc299]
mongorestore(_ZN5mongo10logContextEPKc+0x198) [0xd6c7c8]
mongorestore(_ZN5mongo12verifyFailedEPKcS1_j+0x102) [0xd4bfd2]
mongorestore(_ZN5mongo16ConnectionStringC2ENS0_14ConnectionTypeERKSsS3_+0x1c8) [0x663468]
mongorestore(_ZN5mongo16ConnectionString5parseERKSsRSs+0x1ce) [0x65d82e]
mongorestore(_ZN5mongo4Tool4mainEiPPcS2_+0x2c8) [0x605f98]
mongorestore(main+0x42) [0x606442]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7f5d102f8ec5]
mongorestore() [0x60af41]
terminate called after throwing an instance of 'mongo::AssertionException'
what(): assertion src/mongo/client/dbclientinterface.h:219
Aborted (core dumped)
I made this simple Rakefile to copy the live db to local.
To restore the live db to my local machine I just do...
rake copy_live_db
Replace myapp with the name of your meteor.com app, e.g. myapp.meteor.com.
require 'rubygems'
require 'open-uri'

desc "Backup the live db to local ./dump folder"
task :backup_live_db do
  uri = `meteor mongo myapp --url`
  pass = uri.match(/client:([^@]+)@/)[1]
  puts "Using live db password: #{pass}"
  `mongodump -h meteor.m0.mongolayer.com:27017 -d myapp_meteor_com -u client -p #{pass}`
end

desc "Copy live database to local"
task :copy_live_db => :backup_live_db do
  server = `meteor mongo --url`
  uri = URI.parse(server)
  `mongorestore --host #{uri.host} --port #{uri.port} --db meteor --drop dump/myapp_meteor_com/`
end

desc "Restore last backup"
task :restore do
  server = `meteor mongo --url`
  uri = URI.parse(server)
  `mongorestore --host #{uri.host} --port #{uri.port} --db meteor --drop dump/myapp_meteor_com/`
end
To use an existing local mongodb database on your meteor deploy myAppName site, you need to dump, then restore the mongodb.
Follow the instructions above to mongodump (remember the path) and then run the following to generate your mongorestore command (this replaces the second step and the copy/pasting):
CMD=$(meteor mongo -U myAppName.meteor.com | tail -1 | sed 's_mongodb://\([a-z0-9\-]*\):\([a-f0-9\-]*\)@\(.*\)/\(.*\)_mongorestore -u \1 -p \2 -h \3 -d \4_')
then
$CMD /path/to/dump
From Can mongorestore take a single url argument instead of separate arguments?
You could use a remotely mounted file system via sshfs and then rsync to synchronize the mongodb folder itself, or, I believe, your entire Meteor folder as well. This is like doing an incremental backup and is potentially more efficient.
It's possible to use the same solution for sending changes to your code, etc., so why not get your database changes back at the same time too? (Killing two birds with one stone.)
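A minimal sketch of that idea (the host, paths, and mount point are all assumptions, and the database files should not be copied while mongod is running against them):
# mount the remote app directory locally over ssh
mkdir -p /tmp/remote-app
sshfs user@example.com:/home/user/myapp /tmp/remote-app
# incrementally sync the remote mongodb folder into the local project
rsync -av /tmp/remote-app/.meteor/local/db/ ~/myapp/.meteor/local/db/
# unmount when done
fusermount -u /tmp/remote-app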
Here is a simple bash script that lets you dump your database from meteor.com hosted sites.
#!/bin/bash
site="rankz.meteor.com"
name="$(meteor mongo --url $site)"
echo $name
IFS='@' read -a mongoString <<< "$name"
echo "HEAD: ${mongoString[0]}"
echo "TAIL: ${mongoString[1]}"
IFS=':' read -a pwd <<< "${mongoString[0]}"
echo "${pwd[1]}"
echo "${pwd[1]:2}"
echo "${pwd[2]}"
IFS='/' read -a site <<< "${mongoString[1]}"
echo "${site[0]}"
echo "${site[1]}"
mongodump -u ${pwd[1]:2} -h ${site[0]} -d ${site[1]}\
-p ${pwd[2]}