With the command:
psql --dbname=mattermost --username=mmuser --password
then if I enter the password it succeeds, but if I write:
psql --host=localhost --dbname=mattermost --username=mmuser --password
then the same password fails authentication.
How can I resolve this?
In that case, try using:
psql --host=127.0.0.1 --dbname=mattermost --username=mmuser --password
Also check pg_hba.conf for further info. If the line below is in your pg_hba.conf file:
host all all localhost trust
then it will work with --host=localhost as well.
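For context, that rule would sit alongside the other host entries in pg_hba.conf, and the server has to reload its configuration before it takes effect; note that trust skips password checks entirely, so md5 may be safer on shared machines. A minimal sketch:
# TYPE  DATABASE        USER            ADDRESS                 METHOD
host    all             all             localhost               trust
Then reload the configuration, for example:
psql -U postgres -c "SELECT pg_reload_conf();"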
In Ubuntu 16.04 I created a second PostgreSQL database cluster, called cmg, with a local user as the admin user:
pg_createcluster -u "local_username" -g "local_usergroup" -d /path/to/data/dir 9.5 cmg
The cluster was started with:
pg_ctlcluster 9.5 cmg start
which ran successfully (pg_lsclusters shows both clusters as online).
The problem is I cannot connect to the cluster using psql as is normally done.
I tried using:
psql -h 127.0.0.1 -w -p5433 -U local_username
which fails with:
psql: fe_sendauth: no password supplied
Is there any way to connect to the specific cluster?
Use psql -h your_socket_dir -p 5433 -U postgres to connect locally (peer auth is used by default, so there is a good chance you can log in without a password).
Once logged in, set up a password (create the user if needed) and use it when connecting remotely:
psql -h 127.0.0.1 -p5433 -U local_username
In your connect string you had -w, which means never ask for a password (https://www.postgresql.org/docs/current/static/app-psql.html); by default that only works for local (peer-authenticated) connections.
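As a concrete sketch of the password step mentioned above, run as the cluster's admin OS user over the local socket (the socket directory /var/run/postgresql, the role name, and the password are assumptions; adjust for your cluster):
psql -h /var/run/postgresql -p 5433 -c "ALTER ROLE local_username WITH PASSWORD 'new_password';"
After that, a host rule using md5 (or scram) in pg_hba.conf will accept that password for connections to 127.0.0.1:5433.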
I think the default pg_hba.conf for a newly created cluster expects you to authenticate with peer authentication, so you need to switch to your local user before connecting over the local socket:
[root@server ~]# su - local_username
Password:
[local_username@server ~]$ psql -p 5433
You can check your pg_hba.conf file in /path/to/data/dir/pg_hba.conf to see how it expects you to authenticate.
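For reference, a freshly created Debian/Ubuntu cluster usually ships with rules along these lines (your file may differ):
local   all             postgres                                peer
local   all             all                                     peer
host    all             all             127.0.0.1/32            md5
host    all             all             ::1/128                 md5
With that layout, Unix-socket connections authenticate by OS user (peer) while connections to 127.0.0.1 require a password (md5).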
Alternatively, if you cannot get access as your 'local_username', then su to the postgres user instead in the instructions above and it should work.
I am trying to import my database into my mLab database, but it keeps showing the following error:
2016-10-19T21:05:49.183+0800 Failed: error connecting to db server: server returned error on SASL authentication step: Authentication failed.
This is how I ran my command:
mongorestore -h 243253423.mlab.com:2131242 -d meteor -u <Username> -p <Password> /Users/directory/desktop/mongo/dump
I ran across this error message, and while I am not sure it was your specific problem, this ended up working for me:
mongorestore -d production-db \
-u myusername -p MyPassword \
--authenticationDatabase admin --host mydomain.com \
~/tmp/mongodump/local-production-db/
(Be sure to check the firewall with sudo ufw status and net.bindIp in /etc/mongod.conf (sudo nano /etc/mongod.conf) to confirm that you have access and that your mongo process is listening on an external interface.)
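For reference, the relevant part of /etc/mongod.conf might look roughly like this; bindIp controls which interfaces mongod listens on (0.0.0.0 means all interfaces, so only use that with the firewall and authentication in place):
net:
  port: 27017
  bindIp: 0.0.0.0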
I need to export some info from my MongoDB (hosted on MongoHQ // Compose.io) instance. I'm following these instructions, which give all the examples I need and seem to correspond with the official mongo docs. Here's the command I'm running:
mongoexport -h kahana.mongohq.com:12345/my_db_name -u username -p password -d my_db -c usercollection -f "firstName,lastName,macIdNum,iclass" --csv
and the output:
2014-09-17T21:58:12.806-0500 starting new replica set monitor for replica set kahana.mongohq.com:10043 with seeds my_db_name:27017
2014-09-17T21:58:12.806-0500 [ReplicaSetMonitorWatcher] starting
2014-09-17T21:58:12.919-0500 getaddrinfo("my_db_name") failed: nodename nor servname provided, or not known
2014-09-17T21:58:12.952-0500 warning: No primary detected for set kahana.mongohq.com:12345
2014-09-17T21:58:12.952-0500 All nodes for set kahana.mongohq.com:12345 are down. This has happened for 1 checks in a row. Polling will stop after 29 more failed checks
couldn't connect to [kahana.mongohq.com:12345/my_db_name] connect failed to replica set kahana.mongohq.com:12345/my_db_name:27017
Not really sure what the problem is here. Any help would be greatly appreciated!
This should work:
mongoexport -h kahana.mongohq.com:12345 -u username -p password -d my_db -c usercollection -f "firstName,lastName,macIdNum,iclass" --csv
The '-h' hostname option is meant to specify only the host, i.e. 'hostname/IP:port'; the database name goes in -d, not in the host string.
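Also note that newer versions of mongoexport replaced the --csv flag with --type=csv, so the equivalent command there would be roughly (the output file name is just an example):
mongoexport -h kahana.mongohq.com:12345 -u username -p password -d my_db -c usercollection -f "firstName,lastName,macIdNum,iclass" --type=csv --out usercollection.csv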
We recently ported some data over to MongoDB and are now looking into running daily backups, preferably from a cron job, and restoring one of the backups to a secondary mongo database.
Our system is set up as follows:
server 1: the development mongo database
server 2: two mongo databases, one for staging data and one for production
server 3: is where we run all of our cron jobs/batch scripts from.
I checked the mongo docs, logged into our cron job server, and tried to run the following command (username, host, and password changed for security; I'm not actually connecting to localhost):
mongodump --host 127.0.0.1/development --port 27017 --username user --password pass --out /opt/backup/mongodump-2013-10-07-1
I get the following messages:
Mon Oct 7 10:03:42 starting new replica set monitor for replica set 127.0.0.1 with seed of development:27017
Mon Oct 7 10:03:42 successfully connected to seed development:27017 for replica set 127.0.0.1
Mon Oct 7 10:03:42 warning: node: development:27017 isn't a part of set: 127.0.0.1 ismaster: { ismaster: true, maxBsonObjectSize: 16777216, ok: 1.0 }
Mon Oct 7 10:03:44 replica set monitor for replica set 127.0.0.1 started, address is 127.0.0.1/
Mon Oct 7 10:03:44 [ReplicaSetMonitorWatcher] starting couldn't connect to [127.0.0.1/development:27017] connect failed to set 127.0.0.1/development:27017
I confirmed that I can connect to the mongo database using mongo -u -p ip/development
Our ultimate goal will be to dump the data from the production database and store it in the staging database. These two databases are both located on the same box, if that makes a difference, but for testing purposes I am just trying to get a backup of development test data.
The mongo client can parse a MongoDB connection string URI, so instead of specifying all connection parameters separately you may pass a single connection string URI.
In your case you're trying to pass a connection URI as the host, but 127.0.0.1/development is not a valid host name. That means you should specify the database parameter separately from the host:
mongodump --host 127.0.0.1 -d development --port 27017 --username user --password pass --out /opt/backup/mongodump-2013-10-07-1
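If your version of the tools supports --uri, the same parameters can also be collapsed into a single connection string (a sketch; authSource=admin is an assumption about where the user is defined):
mongodump --uri "mongodb://user:pass@127.0.0.1:27017/development?authSource=admin" --out /opt/backup/mongodump-2013-10-07-1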
You can use mongodump with --uri:
mongodump --uri "mongodb://username:password@127.0.0.1:27100/dbname?replicaSet=replica_name&authSource=admin" --out "C:\Umesh"
All your collections will be stored inside the out folder; it creates a directory named after your database, the collections are dumped as BSON, and the metadata is stored in JSON format.
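The resulting layout looks roughly like this (collection names are just examples):
C:\Umesh\
    dbname\
        users.bson
        users.metadata.json
        orders.bson
        orders.metadata.json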
For restore
mongorestore --uri "mongodb://username:password@127.0.0.1:27100/dbname?replicaSet=replica_name&authSource=admin" -d dbname mongodbumppath
Try this; it will work.
This worked for me.
Reference: https://docs.mongodb.com/manual/reference/program/mongodump
Syntax 1:
mongodump --host <hostname:port> --db <database> --username <username> --password <password> --out <path>
Syntax 2:
mongodump -h <hostname:port> -d <database> -u <username> -p <password> -o <path>
Example 1:
mongodump --host 127.0.0.1:27017 --db db_app --username root --password secret --out /backup/db/app-17-03-07
Example 2:
mongodump -h 127.0.0.1:27017 -d db_app -u root -p secret -o /backup/db/app-17-03-07
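To load such a dump back, a matching mongorestore call might look like this (using the example paths above; the trailing db_app directory is the per-database folder mongodump created):
mongorestore -h 127.0.0.1:27017 -d db_app -u root -p secret /backup/db/app-17-03-07/db_app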
mongodump --host remotehostip:port --db dbname -u username -p password
Here is an example of exporting a collection from a node server to a local machine:
Host: xxx.xxx.xxx.xx
Port: 27017
Username: "XXXX"
Password: "YYYY"
AuthDB: "admin"
DB: "mydb"
D:\mongodb-backup>mongodump -h xxx.xxx.xxx.xxx --port 27017 -u "XXXX" -p "YYYY" --authenticationDatabase "admin" --db "mydb"
Use this to get dump using URI:
mongodump --uri=mongodb+srv://john:xxxxxxxxxxxxxxx@cluster0-jdtjt.mongodb.net/sales
You can also use gzip to take a backup of one collection and compress it on the fly:
mongodump --db somedb --collection somecollection --out - | gzip > collectiondump.gz
Or with a date in the file name:
mongodump --db somedb --collection somecollection --out - | gzip > dump_`date "+%Y-%m-%d"`.gz
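On MongoDB 3.2 or newer the tools can also handle the compression themselves via --archive and --gzip, which avoids the shell pipe (a sketch):
mongodump --db somedb --collection somecollection --archive=collectiondump.gz --gzip
mongorestore --archive=collectiondump.gz --gzip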
This worked for me like a charm for a single collection on a remote Windows server:
mongodump --host <remote_ip> --port <mongo_port> --db <remote_db_name> --authenticationDatabase <remote_auth_db> --username <remote_mongo_username> --password <remote_db_pwd> --out <local_DB_backup_folder> --collection <remote_collection_name>
On Mac, this is what worked for me (but be sure to use your own real credentials):
brew tap mongodb/brew
brew install mongodb-community#5.0
brew services start mongodb/brew/mongodb-community
mongodump --uri "mongodb://username:password@127.0.0.1:27100/dbname" --out "/Users/some_username/code/mongodb_dumps/dump/"
cd /Users/some_username/code/mongodb_dumps/
mongorestore --nsInclude "*.*"
mongodump --host hostip -d dbname --port portnumber --username username --password password --authenticationDatabase admin -o ./path/of/backupfolder
note: the "./path/of/backupfolder" path is on your client machine
This worked for me:
Step 1: Export remote/local DB.
mongodump --uri "mongodb+srv://USER:PASSWORD........." --out "/Users/Hardik/Desktop/mongo_bkp"
Step 2: Import
mongorestore ./mongo_bkp/
Posting this here in case it helps somebody.
It was impossible for me to connect using mongodump. I ended up installing the VS Code Mongo extension and it generated the string for me. The command looks like this:
mongodump -o dump_destination --uri "mongodb://<USERNAME>:<PASSWORD>@<HOST>:<PORT>/<DATABASENAME>?authSource=admin&readPreference=primary&ssl=true"