ERROR Configuring MongoDB using Ansible (MongoNetworkError: connect ECONNREFUSED)

I'm trying to configure a MongoDB replica set using Ansible. I succeeded in installing MongoDB on the primary server and created the replica-set configuration file, but when I launch the playbook I get an error of this type: MongoNetworkError: connect ECONNREFUSED 3.142.150.62:28041
Does anyone have an idea how to solve this?
Attached are the playbook and the error from the Jenkins console.
Playbook:
---
- name: Play1
  hosts: hhe
  #connection: local
  become: true
  #remote_user: ec2-user
  #remote_user: root
  tasks:
    - name: Install gnupg
      package:
        name: gnupg
        state: present

    - name: Import the public key used by the package management system
      shell: wget -qO - https://www.mongodb.org/static/pgp/server-5.0.asc | sudo apt-key add -

    - name: Create a list file for MongoDB
      shell: echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/5.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-5.0.list

    - name: Reload local package database
      command: sudo apt-get update

    - name: Installation of mongodb-org
      package:
        name: mongodb-org
        state: present
        update_cache: yes

    - name: Start mongodb
      service:
        name: mongod
        state: started
        enabled: yes

- name: Play2
  hosts: hhe
  become: true
  tasks:
    - name: create directories on all the EC2 instances
      shell: mkdir -p replicaset/member

- name: Play3
  hosts: secondary1
  become: true
  tasks:
    - name: Start mongoDB with the following command on secondary1
      shell: nohup mongod --port 28042 --bind_ip localhost,ec2-18-191-39-71.us-east-2.compute.amazonaws.com --replSet replica_demo --dbpath replicaset/member &

- name: Play4
  hosts: secondary2
  become: true
  tasks:
    - name: Start mongoDB with the following command on secondary2
      shell: nohup mongod --port 28043 --bind_ip localhost,ec2-18-221-31-81.us-east-2.compute.amazonaws.com --replSet replica_demo --dbpath replicaset/member &

- name: Play5
  hosts: arbiter
  become: true
  tasks:
    - name: Start mongoDB with the following command on arbiter
      shell: nohup mongod --port 27018 --bind_ip localhost,ec2-13-58-35-255.us-east-2.compute.amazonaws.com --replSet replica_demo --dbpath replicaset/member &

- name: Play6
  hosts: primary
  become: true
  tasks:
    - name: Start mongoDB with the following command on primary
      shell: nohup mongod --port 28041 --bind_ip localhost,ec2-3-142-150-62.us-east-2.compute.amazonaws.com --replSet replica_demo --dbpath replicaset/member &

    - name: Create replicaset initialize file
      copy:
        dest: /tmp/replicaset_conf.js
        mode: "u=rw,g=r,o=rwx"
        content: |
          var cfg =
          {
            "_id" : "replica_demo",
            "version" : 1,
            "members" : [
              {
                "_id" : 0,
                "host" : "3.142.150.62:28041"
              },
              {
                "_id" : 1,
                "host" : "18.191.39.71:28042"
              },
              {
                "_id" : 2,
                "host" : "18.221.31.81:28043"
              }
            ]
          }
          rs.initiate(cfg)

    - name: Pause for a while
      pause: seconds=20

    - name: Initialize the replicaset
      shell: mongo /tmp/replicaset_conf.js
The error on the Jenkins console:
PLAY [Play6] *******************************************************************
TASK [Gathering Facts] *********************************************************
ok: [primary]
TASK [Start mongoDB with the following command on primary] *********************
changed: [primary]
TASK [Create replicaset initialize file] ***************************************
ok: [primary]
TASK [Pause for a while] *******************************************************
Pausing for 20 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [primary]
TASK [Initialize the replicaset] ***********************************************
fatal: [primary]: FAILED! => {"changed": true, "cmd": "/usr/bin/mongo 3.142.150.62:28041 /tmp/replicaset_conf.js", "delta": "0:00:00.146406", "end": "2022-08-11 09:46:07.195269", "msg": "non-zero return code", "rc": 1, "start": "2022-08-11 09:46:07.048863", "stderr": "", "stderr_lines": [], "stdout": "MongoDB shell version v5.0.10\nconnecting to: mongodb://3.142.150.62:28041/test?compressors=disabled&gssapiServiceName=mongodb\nError: couldn't connect to server 3.142.150.62:28041, connection attempt failed: SocketException: Error connecting to 3.142.150.62:28041 :: caused by :: Connection refused :\nconnect@src/mongo/shell/mongo.js:372:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1", "stdout_lines": ["MongoDB shell version v5.0.10", "connecting to: mongodb://3.142.150.62:28041/test?compressors=disabled&gssapiServiceName=mongodb", "Error: couldn't connect to server 3.142.150.62:28041, connection attempt failed: SocketException: Error connecting to 3.142.150.62:28041 :: caused by :: Connection refused :", "connect@src/mongo/shell/mongo.js:372:17", "@(connect):2:6", "exception: connect failed", "exiting with code 1"]}

You already start the service with

service:
  name: mongod
  state: started
  enabled: yes

thus shell: nohup mongod ... & is pointless. You cannot start the mongod service multiple times unless you use a different port and dbPath. You should prefer to start mongod as a service, i.e. systemctl start mongod or similar, instead of nohup mongod ... &. I prefer to use the configuration file (typically /etc/mongod.conf) rather than command line options.
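As a minimal sketch of that approach for the primary (the port, hostname, and replica set name are taken from your playbook; the absolute dbPath is an assumption, and in practice you would template the file per host):

- name: Deploy mongod configuration
  copy:
    dest: /etc/mongod.conf
    content: |
      net:
        port: 28041
        bindIp: localhost,ec2-3-142-150-62.us-east-2.compute.amazonaws.com
      storage:
        dbPath: /var/lib/mongodb/replicaset/member   # assumption: an absolute path owned by the mongodb user
      replication:
        replSetName: replica_demo

- name: Restart mongod to apply the configuration
  service:
    name: mongod
    state: restarted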
A plain mongo command uses the default port 27017, i.e. it does not connect to the MongoDB instances you started in the tasks above.
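So the initialization task needs to target that port explicitly, for example (a sketch, reusing the primary's port 28041 from the playbook above):

- name: Initialize the replicaset
  shell: mongo --port 28041 /tmp/replicaset_conf.js   # --port makes the shell connect to the instance started above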
You should wait until the replica set is initiated. You can do it like this:
content: |
  var cfg =
  {
    "_id" : "replica_demo",
    "version" : 1,
    "members" : [
      {
        "_id" : 0,
        "host" : "3.142.150.62:28041"
      },
      {
        "_id" : 1,
        "host" : "18.191.39.71:28042"
      },
      {
        "_id" : 2,
        "host" : "18.221.31.81:28043"
      }
    ]
  }
  rs.initiate(cfg)
  while (! db.hello().isWritablePrimary ) { sleep(1000) }
You configured an ARBITER. However, an arbiter node is useful only with an even number of replica set members; with 3 members it does not make much sense. Anyway, you don't add the arbiter to your replica set, so what is the reason for defining it?
Just a note: you don't have to create a temp file, you can execute the script directly, e.g. similar to this:
shell:
  cmd: mongo --eval '{{ script }}'
  executable: /bin/bash
vars:
  script: |
    var cfg =
    {
      "_id" : "replica_demo",
      ...
    }
    rs.initiate(cfg)
    while (! db.hello().isWritablePrimary ) { sleep(1000) }
    print(rs.status().ok)
register: ret
failed_when: ret.stdout_lines | last != "1"
Be aware of correct quoting: the whole script is passed to the shell inside single quotes, so the script itself must not contain unescaped single quotes (the double quotes in the cfg document are fine).

Related

Cannot create a mongo database with docker

I'm having trouble creating a mongo database using the docker-compose command. Docker Desktop tells me that everything is up and running, including the db, but all I get are the standard 'admin, config, local' databases, not the db I want to create. Here's my docker-compose.yaml:
version: '3'
services:
  app:
    build: ./
    entrypoint: ./.docker/entrypoint.sh
    ports:
      - 3000:3000
    volumes:
      - .:/home/node/app
    depends_on:
      - db
  db:
    image: mongo:4.4.4
    restart: always
    volumes:
      - ./.docker/dbdata:/data/db
      - ./.docker/mongo:/docker-entrypoint-initdb.d
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=root
      - MONGO_INITDB_DATABASE=nest
  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    environment:
      - ME_CONFIG_MONGODB_SERVER=db
      - ME_CONFIG_MONGODB_AUTH_USERNAME=root
      - ME_CONFIG_MONGODB_AUTH_PASSWORD=root
      - ME_CONFIG_MONGODB_ADMINUSERNAME=root
      - ME_CONFIG_MONGODB_ADMINPASSWORD=root
    depends_on:
      - db
My init.js inside .docker/mongo:
db.routes.insertMany([
  {
    _id: "1",
    title: "Primeiro",
    startPosition: {lat: -15.82594, lng: -47.92923},
    endPosition: {lat: -15.82942, lng: -47.92765},
  },
  {
    _id: "2",
    title: "Segundo",
    startPosition: {lat: -15.82449, lng: -47.92756},
    endPosition: {lat: -15.82776, lng: -47.92621},
  },
  {
    _id: "3",
    title: "Terceiro",
    startPosition: {lat: -15.82331, lng: -47.92588},
    endPosition: {lat: -15.82758, lng: -47.92532},
  }
]);
and my Dockerfile:
FROM node:14.18.1-alpine
RUN apk add --no-cache bash
RUN npm install -g @nestjs/cli
USER node
WORKDIR /home/node/app
and this is the 'error' log I get from Docker when I run the nest container with mongodb, the nest app, and mongo-express (there is actually a lot more, but SO keeps flagging it as spam for some reason):
about to fork child process, waiting until server is ready for connections.
Successfully added user: {
  "user" : "root",
  "roles" : [
    {
      "role" : "root",
      "db" : "admin"
    }
  ]
}
Error saving history file: FileOpenFailed Unable to open() file /home/mongodb/.dbshell: No such file or directory
{"t":{"$date":"2022-06-01T19:39:15.542+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn2","msg":"Connection ended","attr":{"remote":"127.0.0.1:39304","connectionId":2,"connectionCount":0}}
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/init.js
{"t":{"$date":"2022-06-01T19:39:15.683+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"127.0.0.1:39310","connectionId":3,"connectionCount":1}}
{"t":{"$date":"2022-06-01T19:39:15.684+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn3","msg":"client metadata","attr":{"remote":"127.0.0.1:39310","client":"conn3","doc":{"application":{"name":"MongoDB Shell"},"driver":{"name":"MongoDB Internal Client","version":"4.4.4"},"os":{"type":"Linux","name":"Ubuntu","architecture":"x86_64","version":"18.04"}}}}
{"t":{"$date":"2022-06-01T19:39:15.701+00:00"},"s":"I", "c":"STORAGE", "id":20320, "ctx":"conn3","msg":"createCollection","attr":{"namespace":"nest.routes","uuidDisposition":"generated","uuid":{"uuid":{"$uuid":"f689868e-af6d-4ec6-b555-dcf520f24788"}},"options":{}}}
{"t":{"$date":"2022-06-01T19:39:15.761+00:00"},"s":"I", "c":"INDEX", "id":20345, "ctx":"conn3","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"nest.routes","index":"_id_","commitTimestamp":{"$timestamp":{"t":0,"i":0}}}}
uncaught exception: ReferenceError: colection is not defined :
@/docker-entrypoint-initdb.d/init.js:23:1
failed to load: /docker-entrypoint-initdb.d/init.js
exiting with code -3
This is what running docker-compose ps shows:
NAME COMMAND SERVICE STATUS PORTS
nest-api-app-1 "./.docker/entrypoin…" app running 0.0.0.0:3000->3000/tcp
nest-api-db-1 "docker-entrypoint.s…" db running 27017/tcp
nest-api-mongo-express-1 "tini -- /docker-ent…" mongo-express running 0.0.0.0:8081->8081/tcp
This is what my Docker Desktop shows: (screenshot not included)
The MongoDB container only creates a database if no database already exists. You probably already have one, which is why a new database isn't created and your initialization script isn't run.
Delete the contents of ./.docker/dbdata on the host. Then start the containers with docker-compose and Mongo should create your database for you.
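As a side note, if the database files don't need to live in the project directory, a named volume could be used instead of the bind mount, so a reset is just docker-compose down -v rather than deleting host files by hand. A sketch of the db service only (the top-level volumes key is the addition here):

services:
  db:
    image: mongo:4.4.4
    restart: always
    volumes:
      - dbdata:/data/db                             # named volume instead of ./.docker/dbdata
      - ./.docker/mongo:/docker-entrypoint-initdb.d
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=root
      - MONGO_INITDB_DATABASE=nest

volumes:
  dbdata:                                           # docker-compose down -v removes it, re-triggering the init scripts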

Cannot configure a Mongo replicaSet from docker init script

I am trying to set up a 2-node replica set in Docker, for local development only. A single node already works fine, but there are keyfile issues when trying to add a member as part of the Docker init script (NB: I can see from the logs that the keyfile is set correctly). The same command works fine from a shell, though, just not via the init script.
Basically, the current config has worked fine for one node, but adding another gives the following error:
mongo_1 | {"t":{"$date":"2021-07-21T16:33:19.583+00:00"},"s":"W", "c":"REPL", "id":23724, "ctx":"ReplCoord-0","msg":"Got error response on heartbeat request","attr":{"hbStatus":{"code":13,"codeName":"Unauthorized","errmsg":"command replSetHeartbeat requires authentication"},"requestTarget":"mongo-secondary:27017","hbResp":{"ok":1.0}}}
mongo_1 | {"t":{"$date":"2021-07-21T16:33:19.583+00:00"},"s":"E", "c":"REPL", "id":21426, "ctx":"conn2","msg":"replSetInitiate failed","attr":{"error":{"code":74,"codeName":"NodeNotFound","errmsg":"replSetInitiate quorum check failed because not all proposed set members responded affirmatively: mongo-secondary:27017 failed with command replSetHeartbeat requires authentication"}}}
If I remove mongo-secondary from the set and instead, after startup, load the exact same config from a mongo shell, everything works fine (the keyfile is used and the set is made with both members).
Currently my config is:
# docker-compose.yml
mongo: &MONGO
  image: mongo:4.4
  restart: unless-stopped
  volumes:
    - mongo_data:/data/db
    - ./scripts/docker/mongo/001_mongo_init.js:/docker-entrypoint-initdb.d/001_mongo_init.js:ro
    - ./scripts/docker/mongo/mongo-entrypoint.sh:/mongo-entrypoint
    - ./conf/mongodb/mongod-config.yml:/etc/mongod.yml
  entrypoint: sh /mongo-entrypoint
  ports:
    - 27017:27017
  env_file:
    - ./env/mongo.env
  command: --auth --config /etc/mongod.yml
  extra_hosts:
    - mongo:127.0.0.1

mongo-secondary:
  <<: *MONGO
  volumes:
    - mongo_secondary_data:/data/db
    - ./scripts/docker/mongo/mongo-entrypoint.sh:/mongo-entrypoint
    - ./conf/mongodb/mongod-config.yml:/etc/mongod.yml
  ports:
    - 27018:27017
  extra_hosts:
    - mongo-secondary:127.0.0.1
# mongo-entrypoint.sh
#!/bin/sh
set -eu

# Create the keyfile used for mongo replicaSet auth.
keyfile=/home/keyfile
echo "Creating replicaSet keyfile..."
echo "keyfile" > ${keyfile}
chmod 0400 $keyfile
chown mongodb $keyfile
echo "Created replicaSet keyfile."

# original entrypoint
exec docker-entrypoint.sh "$@"
// 001_mongo_init.js
function getEnv(envVar) {
  const ret = run('sh', '-c', `printenv ${envVar} > /tmp/${envVar}.txt`);
  if (ret !== 0) throw Error(`Value "${envVar}" is not present in the environment.`);
  return cat(`/tmp/${envVar}.txt`).trim(); // NB cat leaves a \n at the end of text
}

// create replicaset
const rsconf = {
  _id: getEnv('MONGODB_REPLICA_SET'),
  members: [
    {
      _id: 0,
      host: 'mongo:27017',
    },
    {
      _id: 1,
      host: 'mongo-secondary:27017',
      priority: 0, // prevent from becoming master
    },
  ],
};

rs.initiate(rsconf);
rs.conf();

// further code to create users etc.
# mongod-config.yml
---
security:
  keyFile: /home/keyfile
replication:
  replSetName: rs0
  enableMajorityReadConcern: true

How to remove an offline host node from OpenStack with kolla-ansible

I have an offline host node which serves as a compute, control, and storage node. This host was shut down by accident and cannot be brought back online. All services on that node are down and marked enabled, but I can't set them to disabled.
So I can't remove the host with:
kolla-ansible -i multinode stop --yes-i-really-really-mean-it --limit node-17
I get this error:
TASK [Gather facts] ********************************************************************************************************************************************************************************************************************
fatal: [node-17]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host node-17 port 22: Connection timed out", "unreachable": true}
PLAY RECAP *****************************************************************************************************************************************************************************************************************************
node-17 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
How can I remove that offline host node? Thanks.
PS: Why do I want to remove the offline host?
node-14 (online): management node where kolla-ansible is installed; compute, control, and storage node
node-15 (online): compute, control, and storage node
node-17 (offline): compute, control, and storage node
osc99 (being added): compute, control, and storage node
Because when I deploy the new host (osc99) with the command below (with the node-17 line commented out in the multinode file):
kolla-ansible -i multinode deploy --limit osc99
kolla-ansible reports this error:
TASK [keystone : include_tasks] ********************************************************************************************************************************************************************************************************
included: .../share/kolla-ansible/ansible/roles/keystone/tasks/init_fernet.yml for osc99
TASK [keystone : Waiting for Keystone SSH port to be UP] *******************************************************************************************************************************************************************************
ok: [osc99]
TASK [keystone : Initialise fernet key authentication] *********************************************************************************************************************************************************************************
ok: [osc99 -> node-14]
TASK [keystone : Run key distribution] *************************************************************************************************************************************************************************************************
fatal: [osc99 -> node-14]: FAILED! => {"changed": true, "cmd": ["docker", "exec", "-t", "keystone_fernet", "/usr/bin/fernet-push.sh"], "delta": "0:00:04.006900", "end": "2021-07-12 10:14:05.217609", "msg": "non-zero return code", "rc": 255, "start": "2021-07-12 10:14:01.210709", "stderr": "", "stderr_lines": [], "stdout": "Warning: Permanently added '[node.15]:8023' (ECDSA) to the list of known hosts.\r\r\nssh: connect to host node.17 port 8023: No route to host\r\r\nrsync: connection unexpectedly closed (0 bytes received so far) [sender]\r\nrsync error: unexplained error (code 255) at io.c(235) [sender=3.1.2]", "stdout_lines": ["Warning: Permanently added '[node.15]:8023' (ECDSA) to the list of known hosts.", "", "ssh: connect to host node.17 port 8023: No route to host", "", "rsync: connection unexpectedly closed (0 bytes received so far) [sender]", "rsync error: unexplained error (code 255) at io.c(235) [sender=3.1.2]"]}
NO MORE HOSTS LEFT *********************************************************************************************************************************************************************************************************************
PLAY RECAP *****************************************************************************************************************************************************************************************************************************
osc99 : ok=120 changed=55 unreachable=0 failed=1 skipped=31 rescued=0 ignored=1
How can I fix this error? It is the main point of whether or not I can remove the offline host.
Maybe I could fix it by changing the init_fernet.yml file:
node-14:~$ cat .../share/kolla-ansible/ansible/roles/keystone/tasks/init_fernet.yml
---
- name: Waiting for Keystone SSH port to be UP
  wait_for:
    host: "{{ api_interface_address }}"
    port: "{{ keystone_ssh_port }}"
    connect_timeout: 1
  register: check_keystone_ssh_port
  until: check_keystone_ssh_port is success
  retries: 10
  delay: 5

- name: Initialise fernet key authentication
  become: true
  command: "docker exec -t keystone_fernet kolla_keystone_bootstrap {{ keystone_username }} {{ keystone_groupname }}"
  register: fernet_create
  changed_when: fernet_create.stdout.find('localhost | SUCCESS => ') != -1 and (fernet_create.stdout.split('localhost | SUCCESS => ')[1]|from_json).changed
  until: fernet_create.stdout.split()[2] == 'SUCCESS' or fernet_create.stdout.find('Key repository is already initialized') != -1
  retries: 10
  delay: 5
  run_once: True
  delegate_to: "{{ groups['keystone'][0] }}"

- name: Run key distribution
  become: true
  command: docker exec -t keystone_fernet /usr/bin/fernet-push.sh
  run_once: True
  delegate_to: "{{ groups['keystone'][0] }}"
by changing delegate_to: "{{ groups['keystone'][0] }}"? But I can't figure out how to implement that.

Having issues with rs.add() in ansible playbook for mongo

I am using the tasks below in my playbook to initialize the cluster and add the secondaries to the primary:
- name: Initialize replica set
  run_once: true
  delegate_to: host1
  shell: >
    mongo --eval 'printjson(rs.initiate())'

- name: Format secondaries
  run_once: true
  local_action:
    module: debug
    msg: '"{{ item }}:27017"'
  with_items: ['host2', 'host3']
  register: secondaries

- name: Add secondaries
  run_once: true
  delegate_to: host1
  shell: >
    /usr/bin/mongo --eval 'printjson(rs.add({{ item.msg }}))'
  with_items: secondaries.results
I am getting the error below:
TASK [mongodb-setup : Add secondaries] *******************************
fatal: [host1]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'ansible.utils.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'msg'\n\nThe error appears to have been in '/var/lib/awx/projects/_dev/roles/mongodb-setup/tasks/users.yml': line 15, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Add secondaries\n ^ here\n"}
Thanks for the response; I have amended my code as below:
- name: Add secondaries
  run_once: true
  delegate_to: host-1
  shell: >
    /usr/bin/mongo --eval 'printjson(rs.add({{ item }}:27017))'
  with_items:
    - host2
    - host3
but I am getting the error below:
failed: [host-2 -> host-1] (item=host-2) => {"changed": true, "cmd": "/usr/bin/mongo --eval 'printjson(rs.add(host-2:27017))'", "delta": "0:00:00.173077", "end": "2019-08-06 13:29:09.422560", "item": "host-2", "msg": "non-zero return code", "rc": 252, "start": "2019-08-06 13:29:09.249483", "stderr": "", "stderr_lines": [], "stdout": "MongoDB shell version: 3.2.22\nconnecting to: test\n2019-08-06T13:29:09.419-0500 E QUERY [thread1] SyntaxError: missing ) after argument list @(shell eval):1:37", "stdout_lines": ["MongoDB shell version: 3.2.22", "connecting to: test", "2019-08-06T13:29:09.419-0500 E QUERY [thread1] SyntaxError: missing ) after argument list @(shell eval):1:37"]}
Your issue is not with rs.add() but with the data you loop over. In your last task, your item list is a single string.
# Wrong #
with_items: secondaries.results
You want to pass an actual list from your previously registered result:
with_items: "{{ secondaries.results }}"
That being said, registering the result of a debug task is rather odd. You should use set_fact to register what you need in a var, or better, directly loop over your list of hosts in your task. It also looks like the rs.add function is expecting a string, so you should quote the argument in your eval. Something like:
- name: Add secondaries
  shell: >
    /usr/bin/mongo --eval 'printjson(rs.add("{{ item }}:27017"))'
  with_items:
    - host2
    - host3
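Or, instead of hardcoding the hosts, loop directly over an inventory group (a sketch, assuming your secondaries are collected in an inventory group named secondaries):

- name: Add secondaries
  run_once: true
  delegate_to: host1
  shell: >
    /usr/bin/mongo --eval 'printjson(rs.add("{{ item }}:27017"))'
  with_items: "{{ groups['secondaries'] }}"   # 'secondaries' is an assumed group name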
And the way you use delegation seems rather strange to me in this context, but it's hard to give any valid clues without a complete playbook example of what you are trying to do (which you might give in a new question if necessary).

Initialising a mongo replica set from ansible

I am using Stouts.mongodb as the Ansible role for installing MongoDB.
I have the following in the ansible playbook:
---
- hosts: all
  sudo: true
  roles:
    - Stouts.mongodb
  vars:
    - mongodb_conf_replSet: rs0
    - mongodb_conf_bind_ip: 192.168.111.11
  tasks:
    - name: Initialising replica set in mongo
      command: mongo 192.168.111.11:27017 --eval "rs.initiate()"
I have tried this script without the last command task and everything works fine; it's just that I always need to ssh into the box and run rs.initiate() myself. I tried doing it from Ansible instead, but I get the following error:
==> mongo-test: TASK [Initialising replica set in mongo] ***************************************
==> mongo-test: task path: /vagrant/provisioning/mongo_playbook.yml:11
==> mongo-test: fatal: [127.0.0.1]: FAILED! => {"changed": true, "cmd": ["mongo", "192.168.111.11:27017", "--eval", "rs.initiate()"], "delta": "0:00:00.069123", "end": "2016-03-04 20:58:27.513704", "failed": true, "rc": 1, "start": "2016-03-04 20:58:27.444581", "stderr": "exception: connect failed", "stdout": "MongoDB shell version: 2.6.11\nconnecting to: 192.168.111.11:27017/test\n2016-03-04T20:58:27.508+0000 warning: Failed to connect to 192.168.111.11:27017, reason: errno:111 Connection refused\n2016-03-04T20:58:27.508+0000 Error: couldn't connect to server 192.168.111.11:27017 (192.168.111.11), connection attempt failed at src/mongo/shell/mongo.js:148", "stdout_lines": ["MongoDB shell version: 2.6.11", "connecting to: 192.168.111.11:27017/test", "2016-03-04T20:58:27.508+0000 warning: Failed to connect to 192.168.111.11:27017, reason: errno:111 Connection refused", "2016-03-04T20:58:27.508+0000 Error: couldn't connect to server 192.168.111.11:27017 (192.168.111.11), connection attempt failed at src/mongo/shell/mongo.js:148"], "warnings": []}
==> mongo-test:
==> mongo-test: NO MORE HOSTS LEFT *************************************************************
==> mongo-test:
==> mongo-test: RUNNING HANDLER [Stouts.mongodb : mongodb restart] *****************************
==> mongo-test: to retry, use: --limit @mongo_playbook.retry
==> mongo-test:
==> mongo-test: PLAY RECAP *********************************************************************
==> mongo-test: 127.0.0.1 : ok=15 changed=8 unreachable=0 failed=1
Am I doing this the right way?
Is there an alternative way to do this?
UPDATE
I tried wait_for:
---
- hosts: all
  sudo: true
  roles:
    - Stouts.mongodb
  vars:
    - mongodb_conf_replSet: rs0
    - mongodb_conf_bind_ip: 192.168.111.11
  tasks:
    - name: Waiting for port to be available
      wait_for: host=192.168.111.11 port=27017 delay=10 state=drained timeout=160
    - name: Initialising replica set in mongo
      command: mongo 192.168.111.11:27017 --eval "rs.initiate()"
But I get the same error again:
==> mongo-test: TASK [Waiting for port to be available] ****************************************
==> mongo-test: task path: /vagrant/provisioning/mongo_playbook.yml:11
==> mongo-test: ok: [127.0.0.1] => {"changed": false, "elapsed": 10, "path": null, "port": 27017, "search_regex": null, "state": "drained"}
==> mongo-test:
==> mongo-test: TASK [Initialising replica set in mongo] ***************************************
==> mongo-test: task path: /vagrant/provisioning/mongo_playbook.yml:13
==> mongo-test: fatal: [127.0.0.1]: FAILED! => {"changed": true, "cmd": ["mongo", "192.168.111.11:27017", "--eval", "rs.initiate()"], "delta": "0:00:00.092468", "end": "2016-03-04 21:38:02.597946", "failed": true, "rc": 1, "start": "2016-03-04 21:38:02.505478", "stderr": "exception: connect failed", "stdout": "MongoDB shell version: 2.6.11\nconnecting to: 192.168.111.11:27017/test\n2016-03-04T21:38:02.592+0000 warning: Failed to connect to 192.168.111.11:27017, reason: errno:111 Connection refused\n2016-03-04T21:38:02.593+0000 Error: couldn't connect to server 192.168.111.11:27017 (192.168.111.11), connection attempt failed at src/mongo/shell/mongo.js:148", "stdout_lines": ["MongoDB shell version: 2.6.11", "connecting to: 192.168.111.11:27017/test", "2016-03-04T21:38:02.592+0000 warning: Failed to connect to 192.168.111.11:27017, reason: errno:111 Connection refused", "2016-03-04T21:38:02.593+0000 Error: couldn't connect to server 192.168.111.11:27017 (192.168.111.11), connection attempt failed at src/mongo/shell/mongo.js:148"], "warnings": []}
==> mongo-test:
==> mongo-test: NO MORE HOSTS LEFT *************************************************************
==> mongo-test:
==> mongo-test: RUNNING HANDLER [Stouts.mongodb : mongodb restart] *****************************
==> mongo-test: to retry, use: --limit @mongo_playbook.retry
==> mongo-test:
==> mongo-test: PLAY RECAP *********************************************************************
==> mongo-test: 127.0.0.1 : ok=16 changed=8 unreachable=0 failed=1
I am also adding my mongod.conf:
auth = False
bind_ip = 192.168.111.11
cpu = True
dbpath = /data/db
fork = False
httpinterface = False
ipv6 = False
journal = False
logappend = True
logpath = /var/log/mongodb/mongod.log
maxConns = 1000000
noprealloc = False
noscripting = False
notablescan = False
port = 27017
quota = False
quotaFiles = 8
syslog = False
smallfiles = False
# Replica set options:
replSet = rs0
replIndexPrefetch = all
If you know the sequence of restarts for mongodb, you can do something like this:
- name: wait for service, shutdown, and service
  wait_for: state={{ item }} port=7777
  with_items:
    - present
    - drained
    - present
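Note that the output in the question even shows RUNNING HANDLER [Stouts.mongodb : mongodb restart] after the failed task: handlers run at the end of a play by default, so mongod had not yet been restarted with the new bind_ip when rs.initiate() ran. A sketch of one way around that, forcing the pending handlers to run first and then waiting for the port (host and port taken from the question; that the role restarts mongod via this handler is an assumption, untested):

tasks:
  - name: Apply pending handlers so mongod restarts with the new config
    meta: flush_handlers
  - name: Wait for mongod to listen on the replica set interface
    wait_for: host=192.168.111.11 port=27017 state=started timeout=160
  - name: Initialising replica set in mongo
    command: mongo 192.168.111.11:27017 --eval "rs.initiate()"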