MongoDB index failing to create/build

There was a related post stating this was fixed in 4.x, but I'm facing this error with stable 6.0.2 as well.
Index creation:
Enterprise rs0 [direct: primary] bs> db.keyvalue.createIndex({"key": 1},{unique:true,name: "kv_key_idx"})
kv_key_idx
BUT, only the default index is present:
Enterprise rs0 [direct: primary] bs>
Indexes for keyvalue:
[ { v: 2, key: { _id: 1 }, name: '_id_' } ]
From the logs:
{"t":{"$date":"2022-11-02T12:46:02.500+00:00"},"s":"I", "c":"INDEX", "id":20438, "ctx":"conn919","msg":"Index build: registering","attr":{"buildUUID":{"uuid":{"$uuid":"d1c79715-78e7-4e4d-ae7b-af96be6b3a6b"}},"namespace":"bs.keyvalue","collectionUUID":{"uuid":{"$uuid":"f168d08c-6182-4b89-935c-cf2564bd184d"}},"indexes":1,"firstIndex":{"name":"kv_key_idx"},"command":{"createIndexes":"keyvalue","v":2,"indexes":[{"unique":true,"name":"kv_key_idx","key":{"key":1}}],"ignoreUnknownIndexOptions":false}}}
and it very much is failing:
{"t":{"$date":"2022-11-02T12:46:02.516+00:00"},"s":"I", "c":"INDEX", "id":20448, "ctx":"conn919","msg":"Index build: failed because collection dropped","attr":{"buildUUID":{"uuid":{"$uuid":"d1c79715-78e7-4e4d-ae7b-af96be6b3a6b"}},"namespace":"bs.keyvalue","collectionUUID":{"uuid":{"$uuid":"f168d08c-6182-4b89-935c-cf2564bd184d"}},"exception":{"code":26,"codeName":"NamespaceNotFound","errmsg":"Caught exception during index builder (d1c79715-78e7-4e4d-ae7b-af96be6b3a6b) initialization on namespace bs.keyvalue (f168d08c-6182-4b89-935c-cf2564bd184d). 1 index specs provided. First index spec: { v: 2, unique: true, key: { key: 1 }, name: \"kv_key_idx\" } :: caused by :: Collection not found: config.system.indexBuilds"}}}
The db & collection are very much there:
Enterprise rs0 [direct: primary] test> show dbs
READ__ME_TO_RECOVER_YOUR_DATA 40.00 KiB
admin 80.00 KiB
bs. 2.13 TiB
config 168.00 KiB
local 59.85 GiB
Enterprise rs0 [direct: primary] test> use bs
switched to db bs
Enterprise rs0 [direct: primary] bs> show collections
keyvalue
Enterprise rs0 [direct: primary] bs> use local
switched to db local
Enterprise rs0 [direct: primary] local> show collections
oplog.rs
replset.election
replset.initialSyncId
replset.minvalid
replset.oplogTruncateAfterPoint
startup_log
system.replset
system.rollback.id
system.tenantMigration.oplogView [view]
system.views
What am I missing here? Thanks!

Related

ERROR Configuring mongoDB using Ansible (MongoNetworkError: connect ECONNREFUSED)

I'm trying to configure a MongoDB replica set using Ansible.
I succeeded in installing MongoDB on the primary server and created the replica-set configuration file, but when I launch the playbook I get an error of type: MongoNetworkError: connect ECONNREFUSED 3.142.150.62:28041
Does anyone have an idea how to solve this?
Attached: the playbook and the error from the Jenkins console.
Playbook:
---
- name: Play1
  hosts: hhe
  #connection: local
  become: true
  #remote_user: ec2-user
  #remote_user: root
  tasks:
    - name: Install gnupg
      package:
        name: gnupg
        state: present
    - name: Import the public key used by the package management system
      shell: wget -qO - https://www.mongodb.org/static/pgp/server-5.0.asc | sudo apt-key add -
    - name: Create a list file for MongoDB
      shell: echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/5.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-5.0.list
    - name: Reload local package database
      command: sudo apt-get update
    - name: Installation of mongodb-org
      package:
        name: mongodb-org
        state: present
        update_cache: yes
    - name: Start mongodb
      service:
        name: mongod
        state: started
        enabled: yes
- name: Play2
  hosts: hhe
  become: true
  tasks:
    - name: create directories on all the EC2 instances
      shell: mkdir -p replicaset/member
- name: Play3
  hosts: secondary1
  become: true
  tasks:
    - name: Start mongoDB with the following command on secondary1
      shell: nohup mongod --port 28042 --bind_ip localhost,ec2-18-191-39-71.us-east-2.compute.amazonaws.com --replSet replica_demo --dbpath replicaset/member &
- name: Play4
  hosts: secondary2
  become: true
  tasks:
    - name: Start mongoDB with the following command on secondary2
      shell: nohup mongod --port 28043 --bind_ip localhost,ec2-18-221-31-81.us-east-2.compute.amazonaws.com --replSet replica_demo --dbpath replicaset/member &
- name: Play5
  hosts: arbiter
  become: true
  tasks:
    - name: Start mongoDB with the following command on arbiter
      shell: nohup mongod --port 27018 --bind_ip localhost,ec2-13-58-35-255.us-east-2.compute.amazonaws.com --replSet replica_demo --dbpath replicaset/member &
- name: Play6
  hosts: primary
  become: true
  tasks:
    - name: Start mongoDB with the following command on primary
      shell: nohup mongod --port 28041 --bind_ip localhost,ec2-3-142-150-62.us-east-2.compute.amazonaws.com --replSet replica_demo --dbpath replicaset/member &
    - name: Create replicaset initialize file
      copy:
        dest: /tmp/replicaset_conf.js
        mode: "u=rw,g=r,o=rwx"
        content: |
          var cfg =
          {
            "_id" : "replica_demo",
            "version" : 1,
            "members" : [
              {
                "_id" : 0,
                "host" : "3.142.150.62:28041"
              },
              {
                "_id" : 1,
                "host" : "18.191.39.71:28042"
              },
              {
                "_id" : 2,
                "host" : "18.221.31.81:28043"
              }
            ]
          }
          rs.initiate(cfg)
    - name: Pause for a while
      pause: seconds=20
    - name: Initialize the replicaset
      shell: mongo /tmp/replicaset_conf.js
The error on the Jenkins console:
PLAY [Play6] *******************************************************************
TASK [Gathering Facts] *********************************************************
ok: [primary]
TASK [Start mongoDB with the following command on primary] *********************
changed: [primary]
TASK [Create replicaset initialize file] ***************************************
ok: [primary]
TASK [Pause for a while] *******************************************************
Pausing for 20 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [primary]
TASK [Initialize the replicaset] ***********************************************
fatal: [primary]: FAILED! => {"changed": true, "cmd": "/usr/bin/mongo 3.142.150.62:28041 /tmp/replicaset_conf.js", "delta": "0:00:00.146406", "end": "2022-08-11 09:46:07.195269", "msg": "non-zero return code", "rc": 1, "start": "2022-08-11 09:46:07.048863", "stderr": "", "stderr_lines": [], "stdout": "MongoDB shell version v5.0.10\nconnecting to: mongodb://3.142.150.62:28041/test?compressors=disabled&gssapiServiceName=mongodb\nError: couldn't connect to server 3.142.150.62:28041, connection attempt failed: SocketException: Error connecting to 3.142.150.62:28041 :: caused by :: Connection refused :\nconnect#src/mongo/shell/mongo.js:372:17\n#(connect):2:6\nexception: connect failed\nexiting with code 1", "stdout_lines": ["MongoDB shell version v5.0.10", "connecting to: mongodb://3.142.150.62:28041/test?compressors=disabled&gssapiServiceName=mongodb", "Error: couldn't connect to server 3.142.150.62:28041, connection attempt failed: SocketException: Error connecting to 3.142.150.62:28041 :: caused by :: Connection refused :", "connect#src/mongo/shell/mongo.js:372:17", "#(connect):2:6", "exception: connect failed", "exiting with code 1"]}
You already start the service with
service:
  name: mongod
  state: started
  enabled: yes
thus shell: nohup mongod ... & is pointless. You cannot start the mongod service multiple times unless you use a different port and dbPath. You should prefer to start mongod as a service, i.e. systemctl start mongod or similar, instead of nohup mongod ... &. I prefer to use the configuration file (typically /etc/mongod.conf) rather than command-line options.
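For illustration, a minimal sketch of that config-file approach (the path, port and hostname below are placeholders lifted from the question, not tested values): each node gets its own /etc/mongod.conf and is then managed through the service manager instead of nohup.
# Sketch only: write a per-node config file, then restart the service
sudo tee /etc/mongod.conf > /dev/null <<'EOF'
storage:
  dbPath: /var/lib/mongodb
net:
  port: 28041
  bindIp: localhost,ec2-3-142-150-62.us-east-2.compute.amazonaws.com
replication:
  replSetName: replica_demo
EOF
sudo systemctl restart mongod
sudo systemctl status mongod --no-pager    # confirm the node is actually up and listening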
The plain mongo command uses the default port 27017, i.e. it does not connect to the MongoDB instances you started in the tasks above.
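If you keep the custom ports, the shell invocation has to point at the right instance explicitly; a hedged example using the host/port from the playbook above:
# Run the init script against the instance listening on 28041 (not the default 27017)
mongo --host 3.142.150.62 --port 28041 /tmp/replicaset_conf.js
# or, when executed on the primary host itself:
mongo --port 28041 /tmp/replicaset_conf.js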
You should wait until the replica set is initiated. You can do it like this:
content: |
  var cfg =
  {
    "_id" : "replica_demo",
    "version" : 1,
    "members" : [
      {
        "_id" : 0,
        "host" : "3.142.150.62:28041"
      },
      {
        "_id" : 1,
        "host" : "18.191.39.71:28042"
      },
      {
        "_id" : 2,
        "host" : "18.221.31.81:28043"
      }
    ]
  }
  rs.initiate(cfg)
  while (! db.hello().isWritablePrimary ) { sleep(1000) }
You configured an ARBITER. However, an arbiter node is only useful with an even number of replica set members; with 3 members it does not make much sense. In any case, you never add the arbiter to your replica set, so what is the reason for defining it?
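If an arbiter really were intended, it would have to be added explicitly after rs.initiate(), for example something along these lines (hostname copied from Play5; an untested sketch):
# Add the arbiter from the primary, once the replica set is initiated
mongo --port 28041 --eval 'rs.addArb("ec2-13-58-35-255.us-east-2.compute.amazonaws.com:27018")'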
Just a note: you don't have to create a temp file, you can execute the script directly, e.g. similar to this:
shell:
  cmd: mongo --eval '{{ script }}'
  executable: /bin/bash
vars:
  script: |
    var cfg =
    {
      "_id" : "replica_demo",
      ...
    }
    rs.initiate(cfg)
    while (! db.hello().isWritablePrimary ) { sleep(1000) }
    print(rs.status().ok)
register: ret
failed_when: ret.stdout_lines | last != "1"
Be aware of correct quoting.

Mongo: db.auth() fails on Windows

I'm trying to run a mongo instance in a Windows container.
I have found this answer regarding authentication, but it does not work for me:
MongoDB: Server has startup warnings ''Access control is not enabled for the database''
I have a cfg file which I'm using to start mongo. My image is based on an existing mongo Docker image, on top of which I just copy my config file, and I'm trying to instruct mongo to use it. I actually don't know if it really does, but as far as I know the base image's CMD is overridden by my new CMD.
This is the dockerfile
FROM mongo:windowsservercore-1809
WORKDIR c:\
COPY .\mongod.Win.cfg .
CMD ["mongod", "--auth", "-f", "mongod.Win.cfg"]
And this is my mongod.win.cfg
storage:
  dbPath: C:\data\db
  journal:
    enabled: true
security:
  authorization: enabled
And I'm building the image in a docker-compose file:
invoice_db:
  build:
    context: ./Invoice.Db
    dockerfile: ./mongo.win.Dockerfile
  image: mongo:v1
  container_name: invoice-db
  ports:
    - 27017:27017
  environment:
    MONGO_INITDB_ROOT_USERNAME: "admin"
    MONGO_INITDB_ROOT_PASSWORD: "pass"
  volumes:
    - invoice-data-volume:c:\data\db
  restart: unless-stopped
volumes:
  invoice-data-volume:
    name: invoice-data
When I ssh into the container and try to log in as admin with the password pass, I get this:
PS C:\> mongo
MongoDB shell version v5.0.9
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("17467fb1-ecf9-426c-9041-0f15c3a47d30") }
MongoDB server version: 5.0.9
================
Warning: the "mongo" shell has been superseded by "mongosh",
which delivers improved usability and compatibility.The "mongo" shell has been deprecated and will be removed in
an upcoming release.
For installation instructions, see
https://docs.mongodb.com/mongodb-shell/install/
================
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
https://docs.mongodb.com/
Questions? Try the MongoDB Developer Community Forums
https://community.mongodb.com
> use admin
switched to db admin
> db.auth("admin", "pass")
Error: Authentication failed.
0
> db.auth("admin", passwordPrompt())
Enter password:
Error: Authentication failed.
0
>
The logs from the running container.
{"t":{"$date":"2022-07-18T23:38:10.420+03:00"},"s":"I", "c":"ACCESS", "id":20436, "ctx":"conn1","msg":"Checking authorization failed","attr":{"error":{"code":13,"codeName":"Unauthorized","errmsg":"not authorized on admin to execute command { getCmdLineOpts: 1.0, lsid: { id: UUID("17467fb1-ecf9-426c-9041-0f15c3a47d30") }, $db: "admin" }"}}}
{"t":{"$date":"2022-07-18T23:38:18.120+03:00"},"s":"I", "c":"ACCESS", "id":20436, "ctx":"conn1","msg":"Checking authorization failed","attr":{"error":{"code":13,"codeName":"Unauthorized","errmsg":"not authorized on admin to execute command { listCollections: 1.0, filter: {}, nameOnly: true, authorizedCollections: true, maxTimeMS: 1000.0, lsid: { id: UUID("17467fb1-ecf9-426c-9041-0f15c3a47d30") }, $db: "admin" }"}}}
{"t":{"$date":"2022-07-18T23:38:21.712+03:00"},"s":"I", "c":"ACCESS", "id":20251, "ctx":"conn1","msg":"Supported SASL mechanisms requested for unknown user","attr":{"user":{"user":"admin","db":"admin"}}}
{"t":{"$date":"2022-07-18T23:38:21.713+03:00"},"s":"I", "c":"ACCESS", "id":20249, "ctx":"conn1","msg":"Authentication failed","attr":{"mechanism":"SCRAM-SHA-1","speculative":false,"principalName":"admin","authenticationDatabase":"admin","remote":"127.0.0.1:49160","extraInfo":{},"error":"UserNotFound: Could not find user "admin" for db "admin""}}
{"t":{"$date":"2022-07-18T23:38:25.438+03:00"},"s":"I", "c":"ACCESS", "id":20436, "ctx":"conn1","msg":"Checking authorization failed","attr":{"error":{"code":13,"codeName":"Unauthorized","errmsg":"not authorized on admin to execute command { listCollections: 1.0, filter: {}, nameOnly: true, authorizedCollections: true, maxTimeMS: 1000.0, lsid: { id: UUID("17467fb1-ecf9-426c-9041-0f15c3a47d30") }, $db: "admin" }"}}}
{"t":{"$date":"2022-07-18T23:38:32.311+03:00"},"s":"I", "c":"ACCESS", "id":20251, "ctx":"conn1","msg":"Supported SASL mechanisms requested for unknown user","attr":{"user":{"user":"admin","db":"admin"}}}
{"t":{"$date":"2022-07-18T23:38:32.312+03:00"},"s":"I", "c":"ACCESS", "id":20249, "ctx":"conn1","msg":"Authentication failed","attr":{"mechanism":"SCRAM-SHA-1","speculative":false,"principalName":"admin","authenticationDatabase":"admin","remote":"127.0.0.1:49160","extraInfo":{},"error":"UserNotFound: Could not find user "admin" for db "admin""}}
{"t":{"$date":"2022-07-18T23:38:37.028+03:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":"[1658176717:28384][1272:140723313332832], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 34, snapshot max: 34 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 1"}}
{"t":{"$date":"2022-07-18T23:39:37.051+03:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":"[1658176777:50893][1272:140723313332832], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 37, snapshot max: 37 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 1"}}
{"t":{"$date":"2022-07-18T23:40:37.067+03:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":"[1658176837:67089][1272:140723313332832], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 39, snapshot max: 39 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 1"}}
Can someone help me figure this out?
try with:
db.auth({user:"admin", pwd:"pass", mechanism:"SCRAM"})

Mongod Replica set aborting after invariant() failure due to Stable timestamp Timestamp does not equal appliedThrough timestamp

I am a newbie to MongoDB. I was doing a POC on consuming documents using the Java client.
I am using version 4.2.5.
I have 3 instances of mongod running locally with a replica set, as below.
mongod --port 27017 --dbpath /data/d1/ --replSet rs0 --bind_ip localhost
mongod --port 27018 --dbpath /data/d2/ --replSet rs0 --bind_ip localhost
mongod --port 27019 --dbpath /data/d3/ --replSet rs0 --bind_ip localhost
After a certain time, one or two of the instances abort, and when I try to start them again I see the same error. I am not sure what causes it.
Any help would be appreciated.
Error:
2020-05-25T19:37:47.126+0530 I REPL [initandlisten] Rollback ID is 1
2020-05-25T19:37:47.128+0530 F - [initandlisten] Invariant failure !stableTimestamp || stableTimestamp->isNull() || appliedThrough.isNull() || *stableTimestamp == appliedThrough.getTimestamp() Stable timestamp Timestamp(1590410112, 1) does not equal appliedThrough timestamp { ts: Timestamp(1590410172, 1), t: 5 } src/mongo/db/repl/replication_recovery.cpp 412
2020-05-25T19:37:47.128+0530 F - [initandlisten]
***aborting after invariant() failure
2020-05-25T19:37:47.137+0530 F - [initandlisten] Got signal: 6 (Abort trap: 6).
0x109e10cc6 0x109e1054d 0x7fff5d3c9b5d 0xa00 0x7fff5d2836a6 0x109e04d4a 0x1083597af 0x1083722ba 0x108376eb9 0x108077c6c 0x108071744 0x108070999 0x7fff5d1de3d5 0x9
----- BEGIN BACKTRACE -----
"backtrace":[{"b":"10806F000","o":"1DA1CC6","s":"_ZN5mongo15printStackTraceERNSt3__113basic_ostreamIcNS0_11char_traitsIcEEEE"},{"b":"10806F000","o":"1DA154D","s":"_ZN5mongo12_GLOBAL__N_110abruptQuitEi"},{"b":"7FFF5D3C5000","o":"4B5D","s":"_sigtramp"},{"b":"0","o":"A00"},
...
...
...
...
{ "path" : "/System/Library/PrivateFrameworks/BackgroundTaskManagement.framework/Versions/A/BackgroundTaskManagement", "machType" : 6, "b" : "7FFF41F95000", "vmaddr" : "7FFF3C6CD000", "buildId" : "2A396FC07B7930889A82FB93C1181A57" }, { "path" : "/usr/lib/libxslt.1.dylib", "machType" : 6, "b" : "7FFF5C842000", "vmaddr" : "7FFF56F7A000", "buildId" : "EC50E503AEEE3F50956F55E4AF4584D9" }, { "path" : "/System/Library/PrivateFrameworks/AppleSRP.framework/Versions/A/AppleSRP", "machType" : 6, "b" : "7FFF4177E000", "vmaddr" : "7FFF3BEB6000", "buildId" : "EDD16B2E4F353E13B389CF77B3CAD4EB" } ] }}
mongod(_ZN5mongo15printStackTraceERNSt3__113basic_ostreamIcNS0_11char_traitsIcEEEE+0x36) [0x109e10cc6]
mongod(_ZN5mongo12_GLOBAL__N_110abruptQuitEi+0xBD) [0x109e1054d]
libsystem_platform.dylib(_sigtramp+0x1D) [0x7fff5d3c9b5d]
??? [0xa00]
libsystem_c.dylib(abort+0x7F) [0x7fff5d2836a6]
mongod(_ZN5mongo22invariantFailedWithMsgEPKcRKNSt3__112basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEES1_j+0x33A) [0x109e04d4a]
mongod(_ZN5mongo4repl23ReplicationRecoveryImpl16recoverFromOplogEPNS_16OperationContextEN5boost8optionalINS_9TimestampEEE+0x43F) [0x1083597af]
mongod(_ZN5mongo4repl26ReplicationCoordinatorImpl21_startLoadLocalConfigEPNS_16OperationContextE+0x3AA) [0x1083722ba]
mongod(_ZN5mongo4repl26ReplicationCoordinatorImpl7startupEPNS_16OperationContextE+0xE9) [0x108376eb9]
mongod(_ZN5mongo12_GLOBAL__N_114_initAndListenEi+0x28FC) [0x108077c6c]
mongod(_ZN5mongo12_GLOBAL__N_111mongoDbMainEiPPcS2_+0xDA4) [0x108071744]
mongod(main+0x9) [0x108070999]
libdyld.dylib(start+0x1) [0x7fff5d1de3d5]
??? [0x9]
----- END BACKTRACE -----
Abort trap: 6
There appears to be a ticket for this issue experienced by another user. You may consider engaging with MongoDB developers in that ticket to provide the requested information.

Graylog container cannot connect to MongoDB container

I'm having some trouble setting up Graylog2 under Docker. Everything works until I try using authentication. All I get is the following error, repeated forever.
Trying both the root and graylog user (in both the graylog and admin db) gives the same result.
The log from MongoDB says both users are created during setup. But Graylog says it cannot find any graylog user in database graylog. Same with user root.
I'm new to MongoDB and have no idea how authentication works. But from what I understand, authentication (similar to the --auth parameter) is activated when providing user/pw for the root account (https://github.com/docker-library/mongo/pull/145).
Is it possible that Graylog uses a different authentication mechanism than MongoDB is expecting? See line #158 in the pasted log.
Error message as root user
mongodb_1 | 2017-04-16T13:27:52.486+0000 I NETWORK [thread1] connection accepted from 172.18.0.4:46566 #12 (1 connection now open)
mongodb_1 | 2017-04-16T13:27:52.495+0000 I NETWORK [conn12] received client metadata from 172.18.0.4:46566 conn12: { driver: { name: "mongo-java-driver", version: "unknown" }, os: { type: "Linux", name: "Linux", architecture: "amd64", version: "4.4.0-72-generic" }, platform: "Java/Oracle Corporation/1.8.0_72-internal-b15" }
mongodb_1 | 2017-04-16T13:27:52.525+0000 I ACCESS [conn12] SCRAM-SHA-1 authentication failed for root on graylog from client 172.18.0.4:46566 ; UserNotFound: Could not find user root@graylog
mongodb_1 | 2017-04-16T13:27:52.543+0000 I - [conn12] end connection 172.18.0.4:46566 (1 connection now open)
Error message as graylog user (Full log on pastebin)
mongodb_1 | 2017-04-16T15:47:48.404+0000 I NETWORK [thread1] connection accepted from 172.18.0.4:41602 #7 (1 connection now open)
mongodb_1 | 2017-04-16T15:47:48.410+0000 I NETWORK [conn7] received client metadata from 172.18.0.4:41602 conn7: { driver: { name: "mongo-java-driver", version: "unknown" }, os: { type: "Linux", name: "Linux", architecture: "amd64", version: "4.4.0-72-generic" }, platform: "Java/Oracle Corporation/1.8.0_72-internal-b15" }
mongodb_1 | 2017-04-16T15:47:48.418+0000 I ACCESS [conn7] SCRAM-SHA-1 authentication failed for graylog on graylog from client 172.18.0.4:41602 ; UserNotFound: Could not find user graylog@graylog
mongodb_1 | 2017-04-16T15:47:48.423+0000 I - [conn7] end connection 172.18.0.4:41602 (1 connection now open)
This is my ./docker-compose.yml
version: '2'
services:
  mongodb:
    build: ./mongodb
    volumes:
      - /docker/mongodb/data:/data/db
  elasticsearch:
    image: "elasticsearch:2"
    command: "elasticsearch -Des.cluster.name='graylog'"
    volumes:
      - /docker/elasticsearch/data:/usr/share/elasticsearch/data
  graylog:
    image: graylog2/server
    volumes:
      - /docker/graylog/journal:/usr/share/graylog/data/journal
      - /docker/graylog/config:/usr/share/graylog/data/config
    environment:
      #GRAYLOG_MONGODB_URI: mongodb://root:drUqGGCMh@mongodb:27017/graylog
      GRAYLOG_MONGODB_URI: mongodb://graylog:vWGzncmBe9@mongodb:27017/graylog
    depends_on:
      - mongodb
      - elasticsearch
    ports:
      - "9000:9000"
./mongodb/Dockerfile
FROM mongo:3
ENV MONGO_INITDB_ROOT_USERNAME: root
ENV MONGO_INITDB_ROOT_PASSWORD: drUqGGCMh
ADD grayloguser.js /docker-entrypoint-initdb.d/grayloguser.js
./mongodb/grayloguser.js
db.getSiblingDB('graylog');
db.createUser(
  {
    user: "graylog",
    pwd: "vWGzncmBe9",
    roles: [
      { role: "dbOwner", db: "graylog" }
    ]
  }
);
Your MongoDB script is incorrect.
Either assign the return value of db.getSiblingDB('graylog') to a variable and use that for createUser(), or keep using use graylog instead:
graylog = db.getSiblingDB('graylog');
graylog.createUser(
  {
    user: "graylog",
    pwd: "vWGzncmBe9",
    roles: [
      { role: "dbOwner", db: "graylog" }
    ]
  }
);
In other words, just stick to the MongoDB documentation: https://docs.mongodb.com/manual/tutorial/create-users/#username-password-authentication

mongorestore not working. collection is empty

I am trying to dump a MongoDB collection to a file, and then use that to restore it to another MongoDB instance.
Dumping:
mongodump --host 127.0.0.1 --port 27017 --username vespauser --password <passwd> --collection vespastats --db vespa --out /archive/vespa-archive/vespa-db-backup_001
connected to: 127.0.0.1:27017
2015-04-21T16:24:07.070-0400 DATABASE: vespa to /archive/vespa-archive/vespa-db-backup_testing01/vespa
2015-04-21T16:24:07.141-0400 vespa.system.indexes to /archive/vespa-archive/vespa-db-backup_testing01/vespa/system.indexes.bson
2015-04-21T16:24:07.148-0400 4 documents
2015-04-21T16:24:07.149-0400 vespa.vespastats to /archive/vespa-archive/vespa-db-backup_testing01/vespa/vespastats.bson
2015-04-21T16:24:07.316-0400 59724 documents
2015-04-21T16:24:08.118-0400 Metadata for vespa.vespastats to /archive/vespa-archive/vespa-db-backup_testing01/vespa/vespastats.metadata.json
Restoring:
mongorestore -v --drop --host 127.0.0.1 --port 27017 --username admin --password <passwd> /archive/vespa-archive/vespa-db-backup_001
2015-04-21T16:31:11.962-0400 creating new connection to:127.0.0.1:27017
2015-04-21T16:31:11.963-0400 [ConnectBG] BackgroundJob starting: ConnectBG
2015-04-21T16:31:11.963-0400 connected to server 127.0.0.1:27017 (127.0.0.1)
2015-04-21T16:31:11.963-0400 connected connection!
connected to: 127.0.0.1:27017
2015-04-21T16:31:11.966-0400 /home/amurty/vespa-db/vespa-db-backup_testing01/vespa/vespastats.bson
2015-04-21T16:31:11.966-0400 going into namespace [vespa.vespastats]
2015-04-21T16:31:11.966-0400 dropping
file size: 88808161
59724 objects found
2015-04-21T16:31:13.730-0400 Creating index: { key: { _id: 1 }, name: "_id_", ns: "vespa.vespastats" }
2015-04-21T16:31:13.848-0400 Creating index: { key: { url: 1 }, name: "url_1", ns: "vespa.vespastats", background: true }
2015-04-21T16:31:13.858-0400 Creating index: { key: { r_tstpm: 1 }, name: "r_tstpm_1", ns: "vespa.vespastats", background: true }
2015-04-21T16:31:13.859-0400 Creating index: { key: { url: 1, r_tstpm: 1 }, name: "url_1_r_tstpm_1", ns: "vespa.vespastats", background: true }
From /var/log/mongodb/mongod.log:
2015-04-21T16:31:11.963-0400 [initandlisten] connection accepted from 127.0.0.1:58444 #23 (1 connection now open)
2015-04-21T16:31:11.964-0400 [conn23] authenticate db: admin { authenticate: 1, nonce: "xxx", user: "admin", key: "xxx" }
2015-04-21T16:31:11.968-0400 [conn23] CMD: drop vespa.vespastats
2015-04-21T16:31:13.757-0400 [conn23] allocating new ns file /var/lib/mongo/vespa.ns, filling with zeroes...
2015-04-21T16:31:13.838-0400 [FileAllocator] allocating new datafile /var/lib/mongo/vespa.0, filling with zeroes...
2015-04-21T16:31:13.846-0400 [FileAllocator] done allocating datafile /var/lib/mongo/vespa.0, size: 64MB, took 0.007 secs
2015-04-21T16:31:13.847-0400 [conn23] build index on: vespa.vespastats properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "vespa.vespastats" }
2015-04-21T16:31:13.848-0400 [conn23] added index to empty collection
2015-04-21T16:31:13.857-0400 [conn23] build index on: vespa.vespastats properties: { v: 1, key: { url: 1 }, name: "url_1", ns: "vespa.vespastats", background: true }
2015-04-21T16:31:13.857-0400 [conn23] added index to empty collection
2015-04-21T16:31:13.858-0400 [conn23] build index on: vespa.vespastats properties: { v: 1, key: { r_tstpm: 1 }, name: "r_tstpm_1", ns: "vespa.vespastats", background: true }
2015-04-21T16:31:13.859-0400 [conn23] added index to empty collection
2015-04-21T16:31:13.860-0400 [conn23] build index on: vespa.vespastats properties: { v: 1, key: { url: 1, r_tstpm: 1 }, name: "url_1_r_tstpm_1", ns: "vespa.vespastats", background: true }
2015-04-21T16:31:13.860-0400 [conn23] added index to empty collection
2015-04-21T16:31:13.862-0400 [conn23] end connection 127.0.0.1:58444 (0 connections now open)
Now when I log in to my new MongoDB instance and check the collection size, I get a big 0:
# mongo
MongoDB shell version: 2.6.9
connecting to: test
> use vespa
switched to db vespa
> db.auth('vespauser', '<paswd>')
1
> db.vespastats.find()
> db.vespastats.count()
0
>
The collection may or may not exist in the current database, but the query does not return an error, just 0:
db.vespastats.find().count()
The issue is most likely that the data was restored into the database test (the docs mention the target db should be picked up automatically, but I was able to reproduce this behaviour).
Therefore
use test
db.vespastats.find().count()
would have returned the actual number of documents in the collection vespastats.
The issue is caused by not specifying the db name when running the mongorestore binary. Per the mongorestore docs, mongorestore --nsInclude=vespa.vespastats is the updated form (even though -d still works).
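For example, a restore that pins the namespace explicitly might look like this (a sketch reusing the host, credentials and dump path from the question; adjust to your setup):
# Restore only vespa.vespastats from the dump directory (newer tool versions)
mongorestore --host 127.0.0.1 --port 27017 --username admin --password <passwd> --authenticationDatabase admin --nsInclude='vespa.vespastats' /archive/vespa-archive/vespa-db-backup_001
# Older tool versions: name the target db explicitly with -d
mongorestore --host 127.0.0.1 --port 27017 --username admin --password <passwd> --authenticationDatabase admin -d vespa /archive/vespa-archive/vespa-db-backup_001/vespa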
To see where the collection lands, I would run the restore twice and check show dbs in the mongo shell before and after each run: the db size changes (though not immediately, as it may show 8 KB right after the restoration).