Can't import to heroku postgres database from dump - postgresql

Sorry if it is a duplicate, but I tried to find an answer here, and nothing helped.
So I've read Heroku articles like this and this. I was able to save a dump file, which I created with the pg:backups capture command, uploaded it to S3, and tried to restore it with:
heroku pg:backups restore DATABASE 'https://s3-eu-west-1.amazonaws.com/somebucket/uploads/tmp/b011.dump'
But it just does not work! The console logs:
Unknown database: https://s3-eu-west-1.amazonaws.com/somebucket/uploads/tmp/b011.dump. Valid options are: DATABASE_URL, HEROKU_POSTGRESQL_SILVER_URL
I tried the listed options instead of DATABASE, but got the same result. I also tried other hosting, again with the same result. I also tried to restore it from another app, like this:
heroku pg:backups restore myapp::b001 HEROKU_POSTGRESQL_SILVER --app myapp-cedar
But it logs Backup myapp::b001 not found. However, the command heroku pg:backups --app myapp shows that the backup is present:
=== Backups
ID    Backup Time                Status                               Size    Database
----  -------------------------  -----------------------------------  ------  --------
b001  2015-03-13 18:10:14 +0000  Finished 2015-03-13 18:10:22 +0000   9.71MB  ORANGE
Don't know what to do now. Just hope someone will help me.

The order of arguments to the command is significant. In the first example above, you have heroku pg:backups restore DATABASE 'https://s3-eu-west-1.amazonaws.com/somebucket/uploads/tmp/b011.dump', but the command expects the backup reference FIRST and the database to load into second, which would give heroku pg:backups restore 'https://s3-eu-west-1.amazonaws.com/somebucket/uploads/tmp/b011.dump' DATABASE instead. In the newer tooling a backup ID may be preferred to a URL, but a URL ought to work as long as it is accessible. Hope that helps; otherwise let me know and we can try some other things.
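For clarity, here is the corrected invocation on its own (assuming the dump URL is publicly readable and that myapp-cedar is the target app):
heroku pg:backups restore 'https://s3-eu-west-1.amazonaws.com/somebucket/uploads/tmp/b011.dump' DATABASE_URL --app myapp-cedar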

Prepare two shell files as below.
backup.sh
#!/bin/bash
NOWDATE=`date +%Y-%m-%d`
BACKUPNAME=$NOWDATE.dump
export PGPASSWORD='<password>'
echo "Creating backup of database to $BACKUPNAME"
/usr/bin/pg_dump --host <urhostname> --port 5432 --username "<user>" --role "<role>" --no-password --format tar --blobs --verbose --file "./$BACKUPNAME" "<dbname>"
echo "Successfully created database backup"
echo "Uploading backup to Amazon S3 bucket..."
s3cmd put $BACKUPNAME s3://path/to/$BACKUPNAME
echo "Successfully uploaded backup to S3"
echo "Deleting backup file..."
rm $BACKUPNAME
echo "Done"
restore.sh
#!/bin/bash
NOWDATE=`date +%Y-%m-%d`
BACKUPNAME=$NOWDATE.dump
echo "Downloading the file $BACKUPNAME"
s3cmd get s3://path/to/$BACKUPNAME
echo "Successfully downloaded"
echo "Restoring the file $BACKUPNAME, please wait"
export PGPASSWORD='<password>'
pg_restore --host <host> --port 5432 --username "<user>" --dbname "<databasename>" --role "<role>" --no-password --verbose "./$BACKUPNAME"
echo "Deleting backup file..."
rm $BACKUPNAME
echo "Done"
You need to configure s3cmd credentials first. Hope it helps!
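If you haven't done that yet, s3cmd ships an interactive setup that prompts for your AWS access key and secret key and writes ~/.s3cfg:
s3cmd --configure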

geemus correctly solved Danny Ocean's problem.
The order does matter, and the DATABASE_URL goes at the end, not at the beginning.
Just wanted to leave the link to this answer which I believe could help other users with a similar problem.

Related

How to run a postgres command: could not identify current directory

I am able to run psql by doing the following:
Davids-d david$ psql --u postgres
Password for user postgres:
psql (9.4.18)
Type "help" for help.
postgres=#
However, when I run the following command, I get an error:
Davids-iMac:datadocs david$ sudo -u postgres psql -f resources/postgresql/initdb.sql
could not identify current directory: Permission denied
What does this mean, and how would I resolve this? Note that I do have the following var set:
david$ echo $PGDATA
/Users/david/PostgreSQL/data/pg94
The issue is the sudo -u postgres.
Your shell is running as you, but you're running the command as the postgres user. That user does not have permission to read the file, or even to be in the current directory.
We can eliminate psql from the equation by just trying to read the file as the postgres user with sudo -u postgres cat resources/postgresql/initdb.sql. You should get the same error.
There are two things you have to do:
1. cd to a directory that the postgres user can be in.
2. Put the file in a place the postgres user can access (/tmp, for example).
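Putting those together, a minimal sketch (the source path is an assumption based on the question's prompt):
cd /tmp
cp /Users/david/datadocs/resources/postgresql/initdb.sql /tmp/initdb.sql
chmod 644 /tmp/initdb.sql
sudo -u postgres psql -f /tmp/initdb.sql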
Your command seems wrong; try this:
sudo psql -U postgres -f resources/postgresql/initdb.sql

How to restore a Postgres dump while building a Docker image?

I'm trying to avoid touching a shared dev database in my workflow; to make this easier, I want to have Docker image definitions on my disk for the schemas I need. I'm stuck however at making a Dockerfile that will create a Postgres image with the dump already restored. My problem is that while the Docker image is being built, the Postgres server isn't running.
While messing around in a shell inside the container, I tried starting the server manually, but I'm not sure of the proper way to do so. /docker-entrypoint.sh doesn't seem to do anything, and I can't figure out how to "correctly" start the server.
So what I need to do is:
start with "FROM postgres"
copy the dump file into the container
start the PG server
run psql to restore the dump file
kill the PG server
(The steps I don't know how to do are "start the PG server" and "kill the PG server"; the rest is easy.)
What I'd like to avoid is:
Running the restore manually into an existing container, the whole idea is to be able to switch between different databases without having to touch the application config.
Saving the restored image, I'd like to be able to rebuild the image for a database easily with a different dump. (Also it doesn't feel very Docker to have unrepeatable image builds.)
This can be done with the following Dockerfile by providing an example.pg dump file:
FROM postgres:9.6.16-alpine
LABEL maintainer="lu@cobrainer.com"
LABEL org="Cobrainer GmbH"
ARG PG_POSTGRES_PWD=postgres
ARG DBUSER=someuser
ARG DBUSER_PWD=P@ssw0rd
ARG DBNAME=sampledb
ARG DB_DUMP_FILE=example.pg
ENV POSTGRES_DB launchpad
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD ${PG_POSTGRES_PWD}
ENV PGDATA /pgdata
COPY wait-for-pg-isready.sh /tmp/wait-for-pg-isready.sh
COPY ${DB_DUMP_FILE} /tmp/pgdump.pg
RUN set -e && \
    nohup bash -c "docker-entrypoint.sh postgres &" && \
    /tmp/wait-for-pg-isready.sh && \
    psql -U postgres -c "CREATE USER ${DBUSER} WITH SUPERUSER CREATEDB CREATEROLE ENCRYPTED PASSWORD '${DBUSER_PWD}';" && \
    psql -U ${DBUSER} -d ${POSTGRES_DB} -c "CREATE DATABASE ${DBNAME} TEMPLATE template0;" && \
    pg_restore -v --no-owner --role=${DBUSER} --exit-on-error -U ${DBUSER} -d ${DBNAME} /tmp/pgdump.pg && \
    psql -U postgres -c "ALTER USER ${DBUSER} WITH NOSUPERUSER;" && \
    rm -rf /tmp/pgdump.pg
HEALTHCHECK --interval=30s --timeout=30s --start-period=5s --retries=3 \
    CMD pg_isready -U postgres -d launchpad
where the wait-for-pg-isready.sh is:
#!/bin/bash
set -e

get_non_lo_ip() {
  local _ip _non_lo_ip _line _nl=$'\n'
  while IFS=$': \t' read -a _line; do
    [ -z "${_line%inet}" ] &&
      _ip=${_line[${#_line[1]}>4?1:2]} &&
      [ "${_ip#127.0.0.1}" ] && _non_lo_ip=$_ip
  done < <(LANG=C /sbin/ifconfig)
  printf ${1+-v} $1 "%s${_nl:0:$[${#1}>0?0:1]}" $_non_lo_ip
}

get_non_lo_ip NON_LO_IP
until pg_isready -h $NON_LO_IP -U "postgres" -d "launchpad"; do
  >&2 echo "Postgres is not ready - sleeping..."
  sleep 4
done
>&2 echo "Postgres is up - you can execute commands now"
For the two "unsure steps":
- start the PG server: nohup bash -c "docker-entrypoint.sh postgres &" takes care of it
- kill the PG server: it's not really necessary
The above scripts together with a more detailed README are available at https://github.com/cobrainer/pg-docker-with-restored-db
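To build and run the resulting image, something like this should work (a sketch; the restored-db tag is made up):
docker build -t restored-db --build-arg DB_DUMP_FILE=example.pg .
docker run -d -p 5432:5432 restored-db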
You can utilise volumes.
The postgres image has an environment variable you can set: PGDATA
See docs: https://hub.docker.com/_/postgres/
You could then mount a pre-created volume containing the exact DB data you require and point the image at it.
https://docs.docker.com/storage/volumes/#start-a-container-with-a-volume
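For example (a sketch, assuming a volume named pgdata_prepared that already holds a restored data directory):
docker run -d \
  -e PGDATA=/var/lib/postgresql/data/pgdata \
  -v pgdata_prepared:/var/lib/postgresql/data \
  postgres:9.6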
An alternate solution can also be found here: Starting and populating a Postgres container in Docker
A general approach to this, which should work for any system you want to initialize and which I remember using on other projects, is the following.
Instead of trying to do this during the build, use Docker Compose dependencies (see the sketch after this list) so that you end up with:
your db service that fires up the database without any initialization that requires it to be live
a db-init service that:
takes a dependency on db
waits for the database to come up using say dockerize
then initializes the database while maintaining idempotency (e.g. using schema migration)
and exits
your application services that now depend on db-init instead of db
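A minimal docker-compose.yml sketch of that layout (every image name here is a hypothetical placeholder):
version: "3"
services:
  db:
    image: postgres:9.6
  db-init:
    image: my-db-init        # hypothetical image bundling dockerize and the migration scripts
    depends_on:
      - db
    command: sh -c "dockerize -wait tcp://db:5432 -timeout 60s && ./migrate.sh"
  app:
    image: my-app            # hypothetical application image
    depends_on:
      - db-init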

root can't perform listCollections command on db

I have credentials for the root user and I am using those credentials to automate DB backups. The main aim is to create a prototype for automated DB backup and, for simplicity, I am using root. The script (borrowed from an article) looks as follows:
#!/bin/bash
#Force file synchronization and lock writes
mongo admin -u "root" -p "root" --eval "printjson(db.fsyncLock())"
MONGODUMP_PATH="/usr/bin/mongodump"
MONGO_DATABASE="mydb" #replace with your database name
TIMESTAMP=`date +%F-%H%M`
S3_BUCKET_NAME="mydb" #replace with your bucket name on Amazon S3
S3_BUCKET_PATH="backup/mongo"
# Create backup
$MONGODUMP_PATH -d $MONGO_DATABASE
# Add timestamp to backup
mv dump mongodb-$HOSTNAME-$TIMESTAMP
tar cf mongodb-$HOSTNAME-$TIMESTAMP.tar mongodb-$HOSTNAME-$TIMESTAMP
# Upload to S3
s3cmd put mongodb-$HOSTNAME-$TIMESTAMP.tar s3://$S3_BUCKET_NAME/$S3_BUCKET_PATH/mongodb-$HOSTNAME-$TIMESTAMP.tar
#Unlock database writes
mongo admin -u "root" -p "root" --eval "printjson(db.fsyncUnlock())"
#Delete local files
#rm -rf mongodb-*
I am getting the following error:
Failed: error getting collections for database mydb: error running
listCollections. Database: mydb Err: not authorized on mydb to
execute command { listCollections: 1, cursor: {} }
Doesn't root have access to all the databases? I am a bit scared that I might run into a situation where I expect to be able to do something as root but it doesn't have the permission. That is the real reason for posting this question: I want to avoid surprises like this in the future.
The error mentioned above was misleading for me, but my script was indeed wrong, or rather incomplete.
If you look at following line in my script:
$MONGODUMP_PATH -d $MONGO_DATABASE
I am not providing any user in the mongodump command, hence the error message. If I rewrite that line as follows:
$MONGODUMP_PATH -d $MONGO_DATABASE --authenticationDatabase "admin" -u "dbowner" -p "pwd"
then the error goes away.

Restoring .dump file - "Permission Denied"

I am setting up a website and am having some trouble restoring a database .dump file. I am using centos7, selinux, postgresql 9.4, and apache2.
This is my pg_hba.conf file.
This is the command I am running to restore the dump:
psql --single-transaction -U postgres db_name < dump_location
When I do this, I get the error:
Permission denied.
Am I missing something or is there someway I should alter my settings? Let me know if you need more information.
Thank you!
The operating system user you are running your shell as does not have permission to read the path dump_location.
Note that this is not necessarily the operating system user you run psql as. In a statement like:
sudo -u postgres psql mydb < /some/path
then /some/path is read as the current user, before sudo, not as user postgres, because it's the shell that performs the input redirection, not psql.
If, in the above example, you wanted to read the file as user postgres you would:
sudo -u postgres psql -f /some/path mydb
That instructs psql to open and read /some/path when it's started.
Just make sure that you are using the correct database user and that you have at least read permission on the dump file.
psql -d <database> -U postgres -f <dump file>
will work.

Is there a simple way to export the data from a meteor deployed app?

Is there a simple way to export the data from a meteor deployed app?
So, for example, if I had deployed an app named test.meteor.com...
How could I easily download the data that has been collected by that app - so that I could run it locally with data from the deployed app?
To get the URL for your deployed site at meteor.com, use this command (you may need to provide your site password if you password-protected it):
meteor mongo --url YOURSITE.meteor.com
Which will return something like :
mongodb://client:PASSWORD@sky.member1.mongolayer.com:27017/YOURSITE_meteor_com
Which you can give to a program like mongodump
mongodump -u client -h sky.member1.mongolayer.com:27017 -d YOURSITE_meteor_com -p PASSWORD
The password is only good for one minute. For usage:
$ meteor --help mongo
And here's how to do the opposite (uploading your local mongo db to meteor):
https://gist.github.com/IslamMagdy/5519514
# How to upload local db to meteor:
# -h = host, -d = database name, -o = dump folder name
mongodump -h 127.0.0.1:3002 -d meteor -o meteor
# get meteor db url, username, and password
meteor mongo --url myapp.meteor.com
# -h = host, -d = database name (app domain), -p = password, folder = the path to the dumped db
mongorestore -u client -h c0.meteor.m0.mongolayer.com:27017 -d myapp_meteor_com -p 'password' folder/
Based on Kasper Souren's solution I created an updated script that works with current versions of Meteor and also works when you protect your remote Meteor app with a password.
Please create the following script parse-mongo-url.coffee:
spawn = require('child_process').spawn
mongo = spawn 'meteor', ['mongo', '--url', 'YOURPROJECT.meteor.com'], stdio: [process.stdin, 'pipe', process.stderr]
mongo.stdout.on 'data', (data) ->
  data = data.toString()
  m = data.match /mongodb:\/\/([^:]+):([^@]+)@([^:]+):27017\/([^\/]+)/
  if m?
    process.stdout.write "-u #{m[1]} -p #{m[2]} -h #{m[3]} -d #{m[4]}"
  else
    if data == 'Password: '
      process.stderr.write data
Then execute it like this in a *nix shell:
mongodump `coffee parse-mongo-url.coffee`
I have created a tool, mmongo, that wraps all the Mongo DB client shell commands for convenient use on a Meteor database. If you use npm (Node Package Manager), you can install it with:
npm install -g mmongo
Otherwise, see README.
To back up your Meteor database, you can now do:
mmongo test.meteor.com dump
To upload it to your local development meteor would be:
mmongo restore dump/test_meteor_com
And if you accidentally delete your production database:
mmongo test.meteor.com --eval 'db.dropDatabase()' # whoops!
You can easily restore it:
mmongo test.meteor.com restore dump/test_meteor_com
If you'd rather export a collection (say tasks) to something readable:
mmongo test.meteor.com export -c tasks -o tasks.json
Then you can open up tasks.json in your text editor, do some changes and insert the changes with:
mmongo test.meteor.com import tasks.json -c tasks --upsert
Github, NPM
I suppose your data is in a MongoDB database; if that's the case, the question is more mongo-related than meteor-related. You may take a look at the mongoexport and mongoimport command line tools.
Edit (for example):
mongoexport -h flame.mongohq.com:12345 -u my_user -p my_pwd -d my_db -c my_coll
You need to install MongoDB on your computer to get these command line tools, and obviously you need your MongoDB connection information. In the above example, I connect to MongoHQ (flame.mongohq.com is the host, '12345' is the port of your mongo server), but I don't know which Mongo host is actually used by the meteor hosting. If you tried the Meteor examples (TODOs, Leaderboard, etc.) locally, chances are you already installed Mongo, since it uses a local server by default.
Here is another solution in bash
#! /bin/bash
# inspired by http://stackoverflow.com/questions/11353547/bash-string-extraction-manipulation
# http://www.davidpashley.com/articles/writing-robust-shell-scripts/
set -o nounset
set -o errexit
set -o pipefail
set -x
# stackoverflow.com/questions/7216358/date-command-on-os-x-doesnt-have-iso-8601-i-option
function nowString {
  date -u +"%Y-%m-%dT%H:%M:%SZ"
}
NOW=$(nowString)
# prod_url="mongodb://...:...@...:.../..."
prod_pattern="mongodb://([^:]+):([^@]+)@([^:]+):([^/]+)/(.*)"
prod_url=$(meteor mongo katapoolt --url | tr -d '\n')
[[ ${prod_url} =~ ${prod_pattern} ]]
PROD_USER="${BASH_REMATCH[1]}"
PROD_PASSWORD="${BASH_REMATCH[2]}"
PROD_HOST="${BASH_REMATCH[3]}"
PROD_PORT="${BASH_REMATCH[4]}"
PROD_DB="${BASH_REMATCH[5]}"
PROD_DUMP_DIR=dumps/${NOW}
mkdir -p dumps
# local_url="mongodb://...:.../..."
local_pattern="mongodb://([^:]+):([^/]+)/(.*)"
local_url=$(meteor mongo --url | tr -d '\n')
[[ ${local_url} =~ ${local_pattern} ]]
LOCAL_HOST="${BASH_REMATCH[1]}"
LOCAL_PORT="${BASH_REMATCH[2]}"
LOCAL_DB="${BASH_REMATCH[3]}"
mongodump --host ${PROD_HOST} --port ${PROD_PORT} --username ${PROD_USER} --password ${PROD_PASSWORD} --db ${PROD_DB} --out ${PROD_DUMP_DIR}
mongorestore --port ${LOCAL_PORT} --host ${LOCAL_HOST} --db ${LOCAL_DB} ${PROD_DUMP_DIR}/${PROD_DB}
meteor-backup is by far the easiest way to do this.
sudo npm install -g meteor-db-utils
meteor-backup [domain] [collection...]
As of March 2015 you still need to specify all collections you want to fetch though (until this issue is resolved).
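For instance, to fetch two collections (the collection names here are hypothetical):
meteor-backup myapp.meteor.com users tasks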
Stuff from the past below
I'm doing
mongodump $(meteor mongo -U example.meteor.com | coffee url2args.cfee)
together with this little coffeescript, with a mangled extension in order not to confuse Meteor, url2args.cfee:
stdin = process.openStdin()
stdin.setEncoding 'utf8'
stdin.on 'data', (input) ->
  m = input.match /mongodb:\/\/(\w+):((\w+-)+\w+)@((\w+\.)+\w+):27017\/(\w+)/
  console.log "-u #{m[1]} -h #{m[4]} -p #{m[2]} -d #{m[6]}"
(it would be nicer if meteor mongo -U --mongodumpoptions would give these options, or if mongodump would accept the mongo:// URL)
# How to upload local db to meteor:
# -h = host, -d = database name, -o = dump folder name
mongodump -h 127.0.0.1:3001 -d meteor -o meteor
# get meteor db url, username, and password
meteor mongo --url myapp.meteor.com
# -h = host, -d = database name (app domain), -p = password, folder = the path to the dumped db
mongorestore -u client -h http://production-db-a2.meteor.io:27017 -d myapp_meteor_com -p 'password' folder/
While uploading the local db to the remote db, I get an assertion failure:
shubham#shubham-PC:$ mongorestore -u client -h http://production-db-a2.meteor.io:27017 -d myapp_meteor_com -p my_password local/
2015-04-22T16:37:38.504+0530 Assertion failure _setName.size() src/mongo/client/dbclientinterface.h 219
2015-04-22T16:37:38.506+0530 0xdcc299 0xd6c7c8 0xd4bfd2 0x663468 0x65d82e 0x605f98 0x606442 0x7f5d102f8ec5 0x60af41
mongorestore(_ZN5mongo15printStackTraceERSo+0x39) [0xdcc299]
mongorestore(_ZN5mongo10logContextEPKc+0x198) [0xd6c7c8]
mongorestore(_ZN5mongo12verifyFailedEPKcS1_j+0x102) [0xd4bfd2]
mongorestore(_ZN5mongo16ConnectionStringC2ENS0_14ConnectionTypeERKSsS3_+0x1c8) [0x663468]
mongorestore(_ZN5mongo16ConnectionString5parseERKSsRSs+0x1ce) [0x65d82e]
mongorestore(_ZN5mongo4Tool4mainEiPPcS2_+0x2c8) [0x605f98]
mongorestore(main+0x42) [0x606442]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7f5d102f8ec5]
mongorestore() [0x60af41]
terminate called after throwing an instance of 'mongo::AssertionException'
what(): assertion src/mongo/client/dbclientinterface.h:219
Aborted (core dumped)
I made this simple Rakefile to copy the live db to local.
To restore the live db to my local machine I just do...
rake copy_live_db
Replace myapp with the name of your meteor.com app, e.g. myapp.meteor.com.
require 'rubygems'
require 'open-uri'

desc "Backup the live db to local ./dump folder"
task :backup_live_db do
  uri = `meteor mongo myapp --url`
  pass = uri.match(/client:([^@]+)@/)[1]
  puts "Using live db password: #{pass}"
  `mongodump -h meteor.m0.mongolayer.com:27017 -d myapp_meteor_com -u client -p #{pass}`
end

desc "Copy live database to local"
task :copy_live_db => :backup_live_db do
  server = `meteor mongo --url`
  uri = URI.parse(server)
  `mongorestore --host #{uri.host} --port #{uri.port} --db meteor --drop dump/myapp_meteor_com/`
end

desc "Restore last backup"
task :restore do
  server = `meteor mongo --url`
  uri = URI.parse(server)
  `mongorestore --host #{uri.host} --port #{uri.port} --db meteor --drop dump/myapp_meteor_com/`
end
To use an existing local mongodb database on your meteor deploy myAppName site, you need to dump, then restore the mongodb.
Follow the instructions above to mongodump (remember the path) and then run the following to generate your 'mongorestore' (replaces the second step and copy/pasting):
CMD=$(meteor mongo -U myAppName.meteor.com | tail -1 | sed 's_mongodb://\([a-z0-9\-]*\):\([a-f0-9\-]*\)@\(.*\)/\(.*\)_mongorestore -u \1 -p \2 -h \3 -d \4_')
then
$CMD /path/to/dump
From Can mongorestore take a single url argument instead of separate arguments?
I think you can use a remotely mounted file system via sshfs and then rsync to synchronize the mongodb folder itself, or even your entire Meteor folder. This is like doing an incremental backup and is potentially more efficient.
It's possible to use the same solution for sending changes to your code, so why not get your database changes back at the same time too? (Killing two birds with one stone.)
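A rough sketch of that idea (the host, remote path, and mount point are all assumptions):
mkdir -p /mnt/remote-app
sshfs user@example.com:/home/user/myapp /mnt/remote-app
rsync -av /mnt/remote-app/.meteor/local/db/ ./db-backup/
fusermount -u /mnt/remote-app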
Here is a simple bash script that lets you dump your database from meteor.com hosted sites.
#!/bin/bash
site="rankz.meteor.com"
name="$(meteor mongo --url $site)"
echo $name
IFS='@' read -a mongoString <<< "$name"
echo "HEAD: ${mongoString[0]}"
echo "TAIL: ${mongoString[1]}"
IFS=':' read -a pwd <<< "${mongoString[0]}"
echo "${pwd[1]}"
echo "${pwd[1]:2}"
echo "${pwd[2]}"
IFS='/' read -a site <<< "${mongoString[1]}"
echo "${site[0]}"
echo "${site[1]}"
mongodump -u ${pwd[1]:2} -h ${site[0]} -d ${site[1]} -p ${pwd[2]}