It might be a dead simple question, but I still wanted to ask. I've created a Node.js application and deployed it on Heroku, and I've set up the database connection without any trouble.
However, I cannot load the local data from my MongoDB into the MongoLab database I use on Heroku. I've searched Google and could not find a useful solution, so I ended up trying these commands:
mongodump
And:
mongorestore -h mydburl:mydbport -d mydbname -u myusername -p mypassword --db Collect.1
Now when I run the mongorestore command, I receive this error:
ERROR: multiple occurrences
Import BSON files into MongoDB.
When I look at the data directory I used during local development, I see the files Collect.0, Collect.1 and Collect.ns. I know my db name is 'Collect', since in the shell I always type `use Collect`. So I specified the db as Collect.1 on the command line, but I still receive the same error. Should I remove all the other Collect files, or is there another way around this?
You can't use 'mongorestore' against the raw database files. 'mongorestore' is meant to work off of a dump file generated by 'mongodump'. First use 'mongodump' to dump your local database and then use 'mongorestore' to restore that dump file.
If you go to the Tools tab in the MongoLab UI for your database, and click 'Import / Export' you can see an example of each command with the correct params for your database.
Email us at support@mongolab.com if you continue to have trouble.
-will
This can be done in two steps.
1. Dump the database
mongodump -d mylocal_db_name -o dump/
2. Restore the database
mongorestore -h xyz.mongolab.com:12345 -d remote_db_name -u username -p password dump/mylocal_db_name/
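To confirm the restore landed, you can connect to the remote database with the mongo shell and list the collections; the host, database name, and credentials below are the same placeholders used in the commands above:
mongo xyz.mongolab.com:12345/remote_db_name -u username -p password --eval "printjson(db.getCollectionNames())"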
We use Postgres and Flask for our website, and we use the production database dump locally pretty often. To get a fresh dump, I use a remote desktop connection (RDC) to connect to pgAdmin, then use RDC again to copy the .bak file from the server and save it locally. Likewise, I use a local instance of pgAdmin to restore the database state from the backup.
My manager asked me to automate this process so that the production database is used each time a local Flask instance is launched. How can I do that?
You could write a shell script that dumps the database to a local file using pg_dump, then use pg_restore to build a new local database from that dump. You could probably even just pipe the output of pg_dump straight into pg_restore... something like
pg_dump -Fc --host <remote-database-host> --dbname <remote-database-name> --username <remote-username> | pg_restore --host <local-database-host> --username <local-username> --dbname <local-database-name>
(The -Fc flag makes pg_dump emit the custom archive format that pg_restore can read from a pipe.)
To get your password into pg_dump / pg_restore you'll probably want to use a .pgpass file, as described here: How to pass in password to pg_dump?
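For example, a .pgpass entry for the remote database could look like the line below; the placeholders mirror the ones in the command above, the * matches any port, and the file must be readable only by you or libpq will ignore it:
echo "<remote-database-host>:*:<remote-database-name>:<remote-username>:<remote-password>" >> ~/.pgpass
chmod 600 ~/.pgpass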
If you want this to happen automatically when you launch a Flask instance locally, you could call the shell script from your initialization code using a subprocess call if a LOCAL_INSTANCE environment variable is set, or something along those lines.
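A minimal sketch of such a script, assuming a .pgpass file as above; the script name, the environment variable names, and the LOCAL_INSTANCE convention are illustrative, not an established standard:
#!/bin/bash
# sync_prod_db.sh (hypothetical name) - refresh the local database from production
set -euo pipefail

# Only run when the Flask app is being started as a local instance.
if [ "${LOCAL_INSTANCE:-}" != "1" ]; then
  echo "LOCAL_INSTANCE not set; skipping production sync." >&2
  exit 0
fi

# -Fc produces the custom archive format that pg_restore expects; passwords
# are read from ~/.pgpass, so none appear on the command line.
pg_dump -Fc --host "$REMOTE_DB_HOST" --username "$REMOTE_DB_USER" --dbname "$REMOTE_DB_NAME" \
  | pg_restore --clean --if-exists --no-owner \
      --host "$LOCAL_DB_HOST" --username "$LOCAL_DB_USER" --dbname "$LOCAL_DB_NAME"
The Flask initialization code can then invoke this script with a subprocess call before the app starts serving.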
I am setting up a MongoDB database on MongoLab. I have 2 small documents in a single collection in my local db server that I want to upload. I used mongodump to export that to a collection.bson and a collection.metadata.json file.
When I use mongorestore I get an error:
Failed:database.collection: error restoring from /tmp/mongodump/database/collection.bson: insertion error: EOF
collection.bson is less than 2KB. My research shows this error usually comes up when a database is huge, not when it is tiny like mine, and I can't find anything that matches my situation. The common workaround of using --batchSize 1 results in the same issue.
After I run mongorestore, the collection does exist on the remote, but it is empty (0 documents).
How can I get my tiny local db up on my remote server (on MongoLab)?
I am running Mongo 3.4 locally, but my MongoLab instance is 3.2.13. Could the problem be the version mismatch?
mongorestore command:
mongorestore -h ds111882.mlab.com:11882 -d database -u username -p password /tmp/mongodump/database
mongodump command:
mongodump -d database -o /tmp/mongodump
Additional info: I imported the mongodump I created into another local database (also running 3.4) and it worked just fine, but restoring to the mongolab server causes the error.
Also, running bsondump on the generated collection.bson file produces a JSON file with the correct 2 documents.
Found the answer: it is a version issue. My schema was using the Decimal128 type, which isn't supported in 3.2. Downgrading to Double (I didn't need Decimal128 anyway) fixed the issue.
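For anyone hitting the same EOF error, a quick way to spot this kind of mismatch before restoring is to compare the tool and server versions; a sketch, reusing the placeholder host and credentials from the question:
# client tools that create and load the dump
mongodump --version
mongorestore --version
# server version of the remote mLab deployment
mongo ds111882.mlab.com:11882/database -u username -p password --eval "db.version()"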
I am currently trying to import a group of JSON files containing data into my mongo database hosted on IBM Bluemix/Compose.
I have a script that runs through the files, building and running a mongoimport command for each one. This works great against my local database (and occasionally even against the Compose database), but most of the time I get the following error:
2017-05-09T14:59:02.508+0100 Failed: error connecting to db server:
SSL errors: x509 certificate routines:X509_STORE_add_cert:cert
already in hash table x509 certificate
2017-05-09T14:59:02.508+0100 imported 0 documents
My mongoimport command looks like this:
mongoimport --batchSize 100 --ssl --sslAllowInvalidCertificates --host *censored* --collection Personnel --file data/TestData/Personnel_WICS.json -u admin -p *censored* -d MY_DB --authenticationDatabase admin
Is this a mongoimport error? Perhaps an issue with Compose? Or am I doing something incorrectly with the command?
I should note that the files I am importing range in size from 3 MB to 100 MB, but even splitting the larger files into smaller pieces does not seem to help.
My import script runs each import command immediately after the previous one completes. Is there maybe some issue with running several back-to-back imports like this?
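For reference, the import script is roughly a loop like the sketch below; the host and password stand in for the censored values above, and deriving the collection name from the file name is only a guess at how the real script works:
#!/bin/bash
# Rough reconstruction of the import loop described in the question.
for f in data/TestData/*.json; do
  collection="$(basename "$f" .json)"   # assumption: file name matches the collection
  echo "Importing $f into $collection ..."
  mongoimport --batchSize 100 --ssl --sslAllowInvalidCertificates \
    --host "<censored>" -u admin -p "<censored>" -d MY_DB \
    --authenticationDatabase admin \
    --collection "$collection" --file "$f" || break
done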
For anyone finding this in the future: it looks like this was caused by a mismatch in Mongo versions between the machine running the mongoimport command and the Mongo database hosted in Compose.
Compose DB Version: 3.2
Build server machine (running mongoimport): 3.4
Downgrading the build server version has resolved the issue.
I have two apps with the same tables. One of them collects data from the web, and I want to send that data to my second app's (the web app's) database.
With the command below, I have created a file containing the data:
pg_dump -U username -t public."table_name" -d database_name --inserts > table_name.sql
The problem is that I only want to insert the data that does not already exist in the second database.
If I run the command below, I get a lot of 'already exists' errors:
psql -U username second_database_name < table_name.sql
One of the errors:
multiple primary keys for table "table_name" are not allowed
Another one:
relation "table_name_attribute_442....c74_uniq" already exists
--clean, --if-exists ... What should I do?
The way I did it was to have pg_dump create a compressed archive suitable for use with pg_restore, which has the flags needed to import the data without throwing errors.
For example:
pg_dump -Fc -h 127.0.0.1 {db_name_here} > {dump_file_name_here}
The "-Fc" gets you the file-type that pg_restore wants; it will reject a dump made without those magic letters.
Now you can restore the file with:
pg_restore -O -h 127.0.0.1 --clean --disable-triggers -d {target_db_name} {dump_file_name_here}
Voila - the data is now in the target_db.
If the 'psql' command has equivalent flags, I don't know what they are and didn't find them while searching SO. Hopefully, though, this provides a DB dump/restore that 'just works' for those who need to get back to using the DB, instead of fiddling with trying to make a simple dump/restore behave as expected.
Also note, if you do not specify ...
-h 127.0.0.1
... it will go looking for a Unix socket, which may or may not be configured correctly. Chances are your application addresses the DB over TCP anyway, so you never manually configured the Unix socket to match whatever the command defaults to looking for (of course, because things "just working" so you can "just use them" is never quite possible).
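If the goal really is to insert only the rows that are missing, rather than rebuilding the objects with --clean, note that pg_dump from PostgreSQL 12 onwards can emit INSERTs that skip duplicates. A sketch reusing the placeholders from the question; it copies data only and assumes the table definitions already exist in the second database:
# --on-conflict-do-nothing (PostgreSQL 12+) adds ON CONFLICT DO NOTHING to each INSERT,
# so rows that already exist in the target are silently skipped.
pg_dump -U username -t public."table_name" -d database_name \
  --data-only --inserts --on-conflict-do-nothing > table_name.sql
psql -U username second_database_name < table_name.sql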
When developing, I need to pull the latest database so I know I'm working with the latest data. However, we keep a table full of Archives that I don't need to bother downloading because it's a very large table.
I know pg_dump allows for custom parameters that will let you exclude a certain table from being dumped.
Without doing anything crazy like having 2 databases, 1 for data and 1 for archives, is there any way to download everything BUT the archives table from Heroku?
I still need it to keep backups of the archives table, but I don't want to be downloading it. Can I just do a pg_dump when needed that is separate from the backups?
I know it's a long shot, but any suggestions would be greatly appreciated.
You can't add any custom pg_dump options when using heroku pg:backups capture. This command actually calls an undocumented Heroku Postgres API and it doesn't pass any parameters (see here for the code if you are curious).
What you can do is run your own pg_dump dump command that points to the Heroku Postgres instance.
Get the connection info with pg:credentials, where DATABASE_URL can also be the database color if you have more than one database attached to the app:
> heroku pg:credentials DATABASE_URL --app app_name
Connection info string:
"dbname=zzxcasdqwe host=ec2-1-1-1-1.compute-1.amazonaws.com port=1111 user=asdfasdf password=qwertyqwerty sslmode=require"
Connection URL:
postgres://asdfasdf:qwertyqwerty@ec2-1-1-1-1.compute-1.amazonaws.com:1111/zzxcasdqwe
Take either the connection info string or the connection URL, include it as the first argument to pg_dump, and add your custom options:
pg_dump "dbname=zzxcasdqwe host=ec2-1-1-1-1.compute-1.amazonaws.com port=1111 user=asdfasdf password=qwertyqwerty sslmode=require"\
-n schema -t table -O -x -Fc -f dump.out
# OR
pg_dump postgres://asdfasdf:qwertyqwerty@ec2-1-1-1-1.compute-1.amazonaws.com:1111/zzxcasdqwe \
-n schema -t table -O -x -Fc -f dump.out
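Since the goal in the question is to skip the large Archives table, the relevant pg_dump option is --exclude-table. A sketch, assuming the table is literally named archives and using heroku config:get to fetch the connection URL (the dump file name is just an example):
pg_dump "$(heroku config:get DATABASE_URL --app app_name)" \
  --exclude-table=archives -O -x -Fc -f dump_without_archives.out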
I also co-wrote a Heroku plugin (parse_db_url) that will parse DATABASE_URLs into other formats like pg_dump, pg_restore, pgpass, etc. I find it useful when dealing with several different Heroku databases.