Import data into OpenShift MongoDB - mongodb

I created a Java application on OpenShift with the MongoDB cartridge.
My application runs fine, both locally on JBoss AS7 and on OpenShift.
So far so good.
Now I would like to import a CSV into the MongoDB on the OpenShift cloud.
The command is fairly simple:
mongoimport -d dbName -c collectionName --type csv data.csv --headerline
This works fine locally, and I know how to connect to the OpenShift shell and the remote MongoDB. But my question is: how can I use a locally stored file (data.csv) when executing this command in an SSH shell?
I found this on the OpenShift forum, but I don't really know what this tmp directory is or how to use it.
I work on Windows, so I use Cygwin as a shell substitute.
Thanks for any help.

The tmp directory is shorthand for /tmp. On Linux, it's a directory that is cleaned out whenever you restart the computer, so it's a good place for temporary files.
So, you could do something like:
$ rsync data.csv openshiftUsername@openshiftHostname:/tmp
$ ssh openshiftUsername@openshiftHostname
$ mongoimport -d dbName -c collectionName --type csv /tmp/data.csv --headerline
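Since the question mentions Cygwin on Windows, where rsync may not be installed by default, the same transfer works with scp (a sketch using the same placeholder username and hostname):
$ scp data.csv openshiftUsername@openshiftHostname:/tmp
$ ssh openshiftUsername@openshiftHostname
$ mongoimport -d dbName -c collectionName --type csv /tmp/data.csv --headerline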

This is what I needed in October 2014:
mongoimport --host $OPENSHIFT_MONGODB_DB_HOST --port $OPENSHIFT_MONGODB_DB_PORT -u admin -p 123456789 -d dbName -c users /tmp/db.json
Note that I used a JSON file instead of CSV.

When using OpenShift you should use the environment variables to ensure your values are always correct; the OpenShift documentation on environment variables has more details.
SSH into your OpenShift server, then run the following (remember to change the parts marked with ** in the command to match your values):
mongoimport --headerline --type csv \
--host $OPENSHIFT_NOSQL_DB_HOST \
--port $OPENSHIFT_NOSQL_DB_PORT \
--db **your db name** \
--collection **your collection name** \
--username $OPENSHIFT_NOSQL_DB_USERNAME \
--password $OPENSHIFT_NOSQL_DB_PASSWORD \
--file ~/**your app name**/data/**your csv file name**
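To double-check what those variables hold before running the import (assuming the standard OpenShift MongoDB cartridge exposes them), you can list them from the gear:
env | grep -E 'OPENSHIFT_(NOSQL|MONGODB)_DB'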
NOTE
When importing CSV files using mongoimport, the data is saved as strings and numbers only. It will not save arrays or objects. If you have arrays or objects to be saved, you must first convert your CSV file into a proper JSON file (a sketch follows) and then mongoimport the JSON file.
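A minimal sketch of such a conversion, assuming a hypothetical users.csv with name and tags columns, where tags are semicolon-separated values that should become an array:
# emit one JSON document per CSV row, turning "a;b;c" into ["a","b","c"]
awk -F, 'NR > 1 {
  n = split($2, t, ";")
  s = ""
  for (i = 1; i <= n; i++) s = s (i > 1 ? "," : "") "\"" t[i] "\""
  printf "{\"name\":\"%s\",\"tags\":[%s]}\n", $1, s
}' users.csv > users.json
mongoimport -d dbName -c users --file users.json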

I installed RockMongo on my OpenShift instance to manage the MongoDB.
It's a nice user interface, a bit like phpMyAdmin for MySQL.

For users who wish to use mongorestore, the following worked for me:
First copy your dump to the data dir on OpenShift using scp:
scp yourfile.bson yourhex@yourappname.rhcloud.com:app-root/data
Then rhc ssh into your app and cd to the app-root/data folder.
mongorestore --host $OPENSHIFT_MONGODB_DB_HOST \
--port $OPENSHIFT_MONGODB_DB_PORT \
--username $OPENSHIFT_MONGODB_DB_USERNAME \
--password $OPENSHIFT_MONGODB_DB_PASSWORD \
-d yourdb \
-c yourcollection \
yourfilename.bson --drop

Similar to Simon's answer, but this is how I imported a .json file to the database:
mongoimport --host $OPENSHIFT_MONGODB_DB_HOST -u admin -p 123456 --db dbname --collection grades < grades.json

Related

MongoDb: How to import dump data from .gz file?

I want to import dump data from my .gz file.
The location of the file is /home/Alex/Documents/Abc/dump.gz and the name of the db is "Alex".
I have tried mongorestore --gzip --db "Alex" /home/Alex/Documents/Abc/dump.gz
But it shows error:
2018-10-31T12:54:58.359+0530 the --db and --collection args should
only be used when restoring from a BSON file. Other uses are
deprecated and will not exist in the future; use --nsInclude instead
2018-10-31T12:54:58.359+0530 Failed: file
/home/Alex/Documents/Abc/dump.gz does not have .bson extension.
How can I import it?
Dump command:
mongodump --host localhost:27017 --gzip --db Alex --out ./testSO
Restore Command:
mongorestore --host localhost:27017 --gzip --db Alex ./testSO/Alex
Works perfectly!
While using archive:
Dump command:
mongodump --host localhost:27017 --archive=dump.gz --gzip --db Alex
Restore Command:
mongorestore --host localhost:27017 --gzip --archive=dump.gz --db Alex
Note: while using an archive you need to stick with the database name.
A different database name or collection name is not supported.
This is what worked for me with recent versions (100.5.1) of the MongoDB Database Tools:
mongorestore --uri=<CONNECTION_URI> --gzip --archive=<ARCHIVE_NAME> --nsFrom "<SOURCE_DB_NAME>.*" --nsTo "<DEST_DB_NAME>.*"
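For example, with hypothetical placeholder values filled in (a local server, and renaming the Alex database to AlexTest on restore):
mongorestore --uri="mongodb://localhost:27017" --gzip --archive=dump.gz --nsFrom "Alex.*" --nsTo "AlexTest.*"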
Unpack .tgz files and restore the DB:
tar zxvf fileNameHere.tgz
mongorestore --port 27017 -u "username" -p "password" --authenticationDatabase admin /backup_path
mongorestore doesn't find the BSON files inside the gzip file because the mongodump was made with different paths than the restore one.
To solve the problem, the fastest and safest way is to extract the gzip file, go to the parent folder containing the json and bson files, and run mongorestore from there.
For example, say the dump.gz file was made in such a way that the backup is saved within the data/backup/mongo/dump/ folder hierarchy.
Extracting the dump.gz file with the command tar -xvf dump.gz, you will find a folder named data with the subfolders data/backup/mongo/dump/ inside (inside the dump folder are all the backup files with json and bson extensions; these files represent databases, collections, etc.).
Go to the parent folder containing the dump folder, e.g. cd data/backup/mongo/
Now you can run the restore command:
mongorestore --authenticationDatabase admin dump/
where dump/ is the folder containing the backup files.
You may need to use the -h argument to point to the server host (e.g. localhost) and -u followed by a username allowed to perform restore operations (e.g. root).
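Put together, a minimal sketch of those steps (assuming the archive really unpacks to data/backup/mongo/dump/ as described, and a local server with a root user):
tar -xvf dump.gz
cd data/backup/mongo
mongorestore -h localhost -u root --authenticationDatabase admin dump/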

How to import csv in meteor mongo

I'm facing a problem importing a CSV file into my db. I tried this command:
mongoimport --db meteor -h localhost:3001 -c TollPlaza -d meteor --headerline --type csv --file TollPlaza.csv
I referred to this question but am still having a problem.
Try this command:
mongoimport --db meteor --host localhost --port 3001 -c TollPlaza --headerline --type csv --file TollPlaza.csv
Maybe separate the host and port.
The command is perfectly fine.
Follow these steps:
1) Run the meteor project.
2) Open one more command prompt in the same location.
3) Make sure the csv file is present in the same directory.
4) Run the command.
5) Check the DB (show collections), as sketched below.
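To verify the import from the Meteor mongo shell (assuming the app is still running locally on the default port, and TollPlaza is the collection):
$ meteor mongo
> show collections
> db.TollPlaza.count()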

Restoring single collection in an existing mongodb

I'm failing miserably at restoring a single collection into an existing database.
I'm running Ubuntu 14.04 with mongo version 2.6.7
There is a dump/mydbname/contents.bson relative to my home directory.
If I run
mongorestore --collection contents --db mydbname
Then I get:
connected to: 127.0.0.1
don't know what to do with file [dump]
If I add in the path
mongorestore --collection contents --db mydbname --dbpath dump/mydbname
Then I get
If you are running a mongod on the same path you should connect to that instead of direct data file access
I've tried various other combinations, options, etc. and just can't puzzle it out, so I'm coming to the community for help!
If you want to restore a single collection then you have to specify the dump file of the collection. The dump file of the collection is found in the dump/dbname/ folder. So, assuming your dump folder is in your current working directory, the command would go something like:
mongorestore --db mydbname --collection mycollection dump/mydbname/mycollection.bson
I think this is now done with the --nsInclude option:
mongorestore --nsInclude test.purchaseorders dump/
dump/ is the folder with your mongodump data, test is the db, and purchaseorders is the collection.
https://docs.mongodb.com/manual/reference/program/mongorestore/
Steps to restore a specific collection in MongoDB:
1) Go to the directory where your dump folder exists.
2) Execute the following command, modified according to your db name and your collection name.
mongorestore --db mydbname --collection mycollection dump/mydbname/mycollection.bson
If you get a Failed: yourdbname.collection.name: error creating indexes for collection.name: createIndex error: The field 'safe' is not valid for an index specification error, then you can use the following command:
mongorestore --db mydbname --collection mycollection dump/mydbname/mycollection.bson --noIndexRestore
If you are restoring multiple collections, you can use a loop:
for file in "$HOME/mongodump/dev/<your-db>/"* ; do
  # skip metadata and system files; patterns are unquoted so they glob-match
  if [[ $file != *metadata* && $file != *system.* && $file != *locks.* ]]; then
    file="$(basename "$file")"
    # ${file%.*} strips the extension, leaving the collection name
    mongorestore \
      --db cdt_dev \
      --collection "${file%.*}" \
      --host "<your-host>" \
      --authenticationDatabase "<your-auth-db>" \
      -u "user" \
      -p "pwd" \
      "$HOME/mongodump/dev/<your-db>/$file"
  fi
done

how to mongoimport data to deployed meteor app?

UPDATE: this post applied to meteor.com free hosting, which has been shutdown and replaced with Galaxy, a paid Meteor hosting service
I'm using this command
C:\kanjifinder>meteor mongo --url kanjifinder.meteor.com
to get access credentials to my deployed mongo app, but I can't get mongoimport to work with the credentials. I think I just don't exactly understand which part is the username, password and client. Could you break it down for me?
result from server (I modified it to obfuscate the real values):
mongodb://client:e63aaade-xxxx-yyyy-93e4-de0c1b80416f@meteor.m0.mongolayer.com:27017/kanjifinder_meteor_com
my mongoimport attempt (fails authentication):
C:\mongodb\bin>mongoimport -h meteor.m0.mongolayer.com:27017 -u client -p e63aaade-xxxx-yyyy-93e4-de0c1b80416f --db meteor --collection kanji --type csv --file c:\kanjifinder\kanjifinder.csv --headerline
OK got it. This helped:
http://docs.mongodb.org/manual/reference/connection-string/
mongoimport --host meteor.m0.mongolayer.com --port 27017 --username client --password e63aaade-xxxx-yyyy-93e4-de0c1b80416f --db kanjifinder_meteor_com --collection kanji --type csv --file c:\kanjifinder\kanjifinder.csv --headerline
Using mongodump and mongorestore also works:
Dump data from the existing mongodb (mongodb url: mongodb://USER:PASSWORD@DBHOST/DBNAME)
mongodump -h DBHOST -d DBNAME -u USER -p PASSWORD
This will create a "dump" directory, with all the data going to dump/DBNAME.
Get the mongodb url for the deployed meteor app (i.e. www.mymeteorapp.com)
meteor mongo --url METEOR_APP_URL
Note: the PASSWORD expires every minute.
Upload the db dump data to the meteor app (using an example meteor db url)
mongorestore -u client -p dcc56e04-a563-4147-eff4-5ae7c1253c9b -h production-db-b2.meteor.io:27017 -d www_mymeteorapp_com dump/DBNAME/
All the data should get transferred!
If you get an auth_failed error message, your mongoimport version is too different from what's being used on meteor.com, so you need to upgrade. For Ubuntu see https://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/#install-the-latest-stable-version-of-mongodb
#!/bin/sh
# Script to import a csv file to a meteor application deployed to free meteor.com hosting.
# Make sure your version of mongo matches the meteor.com mongo version.
# As of Jan 2016 it seems to be 3.x something. Tested with mongoimport 3.12.
if [ $# -eq 0 ]
then
  echo "usage: $0 xxx.meteor.com collection filename.csv"
  exit 1
fi
URL=$1
COLLECTION=$2
FILE=$3
echo Connecting to $URL, please stand by.... collection=$COLLECTION file=$FILE
# turn mongodb://user:password@host:port/db into "-u user -p password -h host:port -d db"
PUPMS=`meteor mongo --url $URL | sed 's/mongodb:\/\// -u /' | sed 's/:/ -p /' | sed 's/@/ -h /' | sed 's/\// -d /'`
mongoimport -v $PUPMS --type csv --headerline --collection $COLLECTION --file $FILE
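Example invocation (the script name and all three arguments are hypothetical):
$ ./meteor_csv_import.sh kanjifinder.meteor.com kanji kanjifinder.csv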

How to import dumped Mongodb?

Dumped a MongoDB successfully:
$ mongodump -h ourhost.com:portnumber -d db_name01 -u username -p
I need to import or export it to a testserver and am struggling with it. Please help me figure it out.
I tried some ways:
$ mongoimport -h host.com:port -c dbname -d dbname_test -u username -p
connected to host.
Password: ...
Gives this error:
assertion: 9997 auth failed: { errmsg: "auth fails", ok: 0.0 }
$ mongoimport -h host.com:port -d dbname_test -u username -p
Gives this error:
no collection specified!
How do I specify which collection to use? What should I use for -d: the database I'd like to upload, or the one I want to use as the test on the target? I would like to import the full DB, not only a collection of it.
The counterpart to mongodump is mongorestore (and the counterpart to mongoimport is mongoexport) -- the major difference is in the format of the files created and understood by the tools (dump and restore read and write BSON files; export and import deal with text file formats: JSON, CSV, TSV).
If you've already run mongodump, you should have a directory named dump, with a subdirectory for each database that was dumped, and a file in those directories for each collection. You can then restore this with a command like:
mongorestore -h host.com:port -d dbname_test -u username -p password dump/dbname/
Assuming that you want to put the contents of the database dbname into a new database called dbname_test.
You may have to specify the authentication database
mongoimport -h localhost:27017 --authenticationDatabase admin -u user -p -d database -c collection --type csv --headerline --file awesomedata.csv
For anyone else who might reach this question after all these years (like I did): if you are using
a dump which was created using mongodump,
and you are trying to restore from a dump directory,
and you are going to be using the default port 27017,
then all you have to do is:
mongorestore dump/
Refer to the mongorestore docs for more info. Cheers!
When you do a mongodump it will dump in a binary format. You need to use mongorestore to "import" this data.
Mongoimport is for importing data that was exported using mongoexport
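To make the pairing concrete, a sketch of both round-trips with placeholder host, credentials, and names:
# BSON round-trip: full database via mongodump/mongorestore
mongodump -h host.com:port -d dbname -u username -p password
mongorestore -h host.com:port -d dbname_test -u username -p password dump/dbname/
# text round-trip: a single collection via mongoexport/mongoimport
mongoexport -h host.com:port -d dbname -c users -u username -p password --out users.json
mongoimport -h host.com:port -d dbname_test -c users -u username -p password --file users.json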