Initialising a mongo replica set from ansible - mongodb

I am using Stouts.mongodb as the ansible role for installing MongoDB.
I have the following in the ansible playbook:
---
- hosts: all
  sudo: true
  roles:
    - Stouts.mongodb
  vars:
    - mongodb_conf_replSet: rs0
    - mongodb_conf_bind_ip: 192.168.111.11
  tasks:
    - name: Initialising replica set in mongo
      command: mongo 192.168.111.11:27017 --eval "rs.initiate()"
I have tried this playbook without the last command task and everything works fine; it's just that I then have to ssh into the box and run rs.initiate() myself. I thought of doing it from Ansible instead, but I get the following error:
==> mongo-test: TASK [Initialising replica set in mongo] ***************************************
==> mongo-test: task path: /vagrant/provisioning/mongo_playbook.yml:11
==> mongo-test: fatal: [127.0.0.1]: FAILED! => {"changed": true, "cmd": ["mongo", "192.168.111.11:27017", "--eval", "rs.initiate()"], "delta": "0:00:00.069123", "end": "2016-03-04 20:58:27.513704", "failed": true, "rc": 1, "start": "2016-03-04 20:58:27.444581", "stderr": "exception: connect failed", "stdout": "MongoDB shell version: 2.6.11\nconnecting to: 192.168.111.11:27017/test\n2016-03-04T20:58:27.508+0000 warning: Failed to connect to 192.168.111.11:27017, reason: errno:111 Connection refused\n2016-03-04T20:58:27.508+0000 Error: couldn't connect to server 192.168.111.11:27017 (192.168.111.11), connection attempt failed at src/mongo/shell/mongo.js:148", "stdout_lines": ["MongoDB shell version: 2.6.11", "connecting to: 192.168.111.11:27017/test", "2016-03-04T20:58:27.508+0000 warning: Failed to connect to 192.168.111.11:27017, reason: errno:111 Connection refused", "2016-03-04T20:58:27.508+0000 Error: couldn't connect to server 192.168.111.11:27017 (192.168.111.11), connection attempt failed at src/mongo/shell/mongo.js:148"], "warnings": []}
==> mongo-test:
==> mongo-test: NO MORE HOSTS LEFT *************************************************************
==> mongo-test:
==> mongo-test: RUNNING HANDLER [Stouts.mongodb : mongodb restart] *****************************
==> mongo-test: to retry, use: --limit #mongo_playbook.retry
==> mongo-test:
==> mongo-test: PLAY RECAP *********************************************************************
==> mongo-test: 127.0.0.1 : ok=15 changed=8 unreachable=0 failed=1
Am I doing this the right way?
Is there an alternative way to do this?
UPDATE
I tried wait_for:
---
- hosts: all
  sudo: true
  roles:
    - Stouts.mongodb
  vars:
    - mongodb_conf_replSet: rs0
    - mongodb_conf_bind_ip: 192.168.111.11
  tasks:
    - name: Waiting for port to be available
      wait_for: host=192.168.111.11 port=27017 delay=10 state=drained timeout=160
    - name: Initialising replica set in mongo
      command: mongo 192.168.111.11:27017 --eval "rs.initiate()"
But I got the same error again:
==> mongo-test: TASK [Waiting for port to be available] ****************************************
==> mongo-test: task path: /vagrant/provisioning/mongo_playbook.yml:11
==> mongo-test: ok: [127.0.0.1] => {"changed": false, "elapsed": 10, "path": null, "port": 27017, "search_regex": null, "state": "drained"}
==> mongo-test:
==> mongo-test: TASK [Initialising replica set in mongo] ***************************************
==> mongo-test: task path: /vagrant/provisioning/mongo_playbook.yml:13
==> mongo-test: fatal: [127.0.0.1]: FAILED! => {"changed": true, "cmd": ["mongo", "192.168.111.11:27017", "--eval", "rs.initiate()"], "delta": "0:00:00.092468", "end": "2016-03-04 21:38:02.597946", "failed": true, "rc": 1, "start": "2016-03-04 21:38:02.505478", "stderr": "exception: connect failed", "stdout": "MongoDB shell version: 2.6.11\nconnecting to: 192.168.111.11:27017/test\n2016-03-04T21:38:02.592+0000 warning: Failed to connect to 192.168.111.11:27017, reason: errno:111 Connection refused\n2016-03-04T21:38:02.593+0000 Error: couldn't connect to server 192.168.111.11:27017 (192.168.111.11), connection attempt failed at src/mongo/shell/mongo.js:148", "stdout_lines": ["MongoDB shell version: 2.6.11", "connecting to: 192.168.111.11:27017/test", "2016-03-04T21:38:02.592+0000 warning: Failed to connect to 192.168.111.11:27017, reason: errno:111 Connection refused", "2016-03-04T21:38:02.593+0000 Error: couldn't connect to server 192.168.111.11:27017 (192.168.111.11), connection attempt failed at src/mongo/shell/mongo.js:148"], "warnings": []}
==> mongo-test:
==> mongo-test: NO MORE HOSTS LEFT *************************************************************
==> mongo-test:
==> mongo-test: RUNNING HANDLER [Stouts.mongodb : mongodb restart] *****************************
==> mongo-test: to retry, use: --limit #mongo_playbook.retry
==> mongo-test:
==> mongo-test: PLAY RECAP *********************************************************************
==> mongo-test: 127.0.0.1 : ok=16 changed=8 unreachable=0 failed=1
I am also adding my mongod.conf:
auth = False
bind_ip = 192.168.111.11
cpu = True
dbpath = /data/db
fork = False
httpinterface = False
ipv6 = False
journal = False
logappend = True
logpath = /var/log/mongodb/mongod.log
maxConns = 1000000
noprealloc = False
noscripting = False
notablescan = False
port = 27017
quota = False
quotaFiles = 8
syslog = False
smallfiles = False
# Replica set options:
replSet = rs0
replIndexPrefetch = all

If you know the sequence of restarts for mongodb, you can do something like this:
- name: wait for service, shutdown, and service
  wait_for: state={{ item }} port=7777
  with_items:
    - present
    - drained
    - present
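Note that in the failure log above, the Stouts.mongodb restart handler only runs after the tasks (handlers run at the end of the play), so when the command task executes, mongod may not yet be listening on 192.168.111.11; state=drained also passes immediately when the port is closed, it does not wait for the port to open. A minimal sketch for the tasks section, assuming the role's pending handler is what rebinds mongod to the new address:

tasks:
  # flush_handlers forces the queued "mongodb restart" handler to run right now
  - name: Restart mongod before initiating the replica set
    meta: flush_handlers

  - name: Wait for mongod to listen on the replica set address
    wait_for: host=192.168.111.11 port=27017 delay=5 state=started timeout=160

  - name: Initialising replica set in mongo
    command: mongo 192.168.111.11:27017 --eval "rs.initiate()"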

Related

Mongo ECONNREFUSED Error: Using docker-compose to spin up a container with a backend app and a container with a mongo db leads to error

I have a custom-made docker image for the backend of my app. I have a yaml file that runs my app image and a mongo image. However, when I use docker-compose on the yml file, I get the following error (about 20 seconds after the containers start running):
(node:33) [MONGOOSE] DeprecationWarning: Mongoose: the `strictQuery` option will be switched back to `false` by default in Mongoose 7. Use `mongoose.set('strictQuery', false);` if you want to prepare for this change. Or use `mongoose.set('strictQuery', true);` to suppress this warning.
(Use `node --trace-deprecation ...` to show where the warning was created)
Server listening on port 3000
/cloudband/node_modules/mongoose/lib/connection.js:825
  const serverSelectionError = new ServerSelectionError();
                               ^
MongooseServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017
    at Connection.openUri (/cloudband/node_modules/mongoose/lib/connection.js:825:32)
    at /cloudband/node_modules/mongoose/lib/index.js:409:10
    at /cloudband/node_modules/mongoose/lib/helpers/promiseOrCallback.js:41:5
    at new Promise (<anonymous>)
    at promiseOrCallback (/cloudband/node_modules/mongoose/lib/helpers/promiseOrCallback.js:40:10)
    at Mongoose._promiseOrCallback (/cloudband/node_modules/mongoose/lib/index.js:1262:10)
    at Mongoose.connect (/cloudband/node_modules/mongoose/lib/index.js:408:20)
    at Object.<anonymous> (/cloudband/server/server.js:15:4)
    at Module._compile (node:internal/modules/cjs/loader:1239:14)
    at Module._extensions..js (node:internal/modules/cjs/loader:1293:10) {
  reason: TopologyDescription {
    type: 'Unknown',
    servers: Map(1) {
      'localhost:27017' => ServerDescription {
        address: 'localhost:27017',
        type: 'Unknown',
        hosts: [],
        passives: [],
        arbiters: [],
        tags: {},
        minWireVersion: 0,
        maxWireVersion: 0,
        roundTripTime: -1,
        lastUpdateTime: 28094812,
        lastWriteDate: 0,
        error: MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017
            at connectionFailureError (/cloudband/node_modules/mongodb/lib/cmap/connect.js:387:20)
            at Socket.<anonymous> (/cloudband/node_modules/mongodb/lib/cmap/connect.js:310:22)
            at Object.onceWrapper (node:events:628:26)
            at Socket.emit (node:events:513:28)
            at emitErrorNT (node:internal/streams/destroy:151:8)
            at emitErrorCloseNT (node:internal/streams/destroy:116:3)
            at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
          cause: Error: connect ECONNREFUSED 127.0.0.1:27017
              at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1495:16) {
            errno: -111,
            code: 'ECONNREFUSED',
            syscall: 'connect',
            address: '127.0.0.1',
            port: 27017
          },
          [Symbol(errorLabels)]: Set(1) { 'ResetPool' }
        },
        topologyVersion: null,
        setName: null,
        setVersion: null,
        electionId: null,
        logicalSessionTimeoutMinutes: null,
        primary: null,
        me: null,
        '$clusterTime': null
      }
    },
    stale: false,
    compatible: true,
    heartbeatFrequencyMS: 10000,
    localThresholdMS: 15,
    setName: null,
    maxElectionId: null,
    maxSetVersion: null,
    commonWireVersion: 0,
    logicalSessionTimeoutMinutes: null
  },
  code: undefined
}
Here are my files:
Dockerfile:
FROM node:19.4.0
WORKDIR /cloudband
COPY package.json /cloudband/
COPY package-lock.json /cloudband/
RUN npm ci
COPY .env /cloudband/
COPY server /cloudband/server/
EXPOSE 3000
CMD ["npm", "run", "dev:server"]
YAML file:
version: '3'
services:
  mongo:
    image: mongo
    container_name: mongo
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
  cloudband:
    image: cloudband
    container_name: cloudband
    ports:
      - 3000:3000
    command: npm run dev:server
networks:
  app:
I expected my application and mongo db to start running in their respective containers and for them to be able to communicate (i.e. create documents / find documents / etc.).
What I have already tried:
- making sure they are in the same network (they are)
- making sure they can ping each other (they can)
- adding links to my app in the yaml file
- checking the configurations, which I think are ok (port, host, ip)
- switching my uri to the following things:
# MONGO_URI_=mongodb://admin:password@localhost:27017/dbname
MONGO_URI_=mongodb://localhost:27017/dbname
# MONGO_URI_=mongodb://127.0.0.1:27017/dbname
Things to consider:
node v18.12.0 is installed on my computer
In a container, localhost means the container itself.
Docker-compose creates a docker network where the containers can talk to each other using their service name or container names as host names.
So, instead of
MONGO_URI_=mongodb://localhost:27017/dbname
you need to use
MONGO_URI_=mongodb://mongo:27017/dbname
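For illustration, here is a minimal sketch of the compose file with that change applied; the MONGO_URI_ variable name and the images are taken from the question, and passing the URI through the container environment is an assumption about how the app reads its configuration:

version: '3'
services:
  mongo:
    image: mongo
    ports:
      - 27017:27017
  cloudband:
    image: cloudband
    ports:
      - 3000:3000
    environment:
      # "mongo" resolves to the mongo service on the default compose network
      - MONGO_URI_=mongodb://mongo:27017/dbname
    depends_on:
      - mongo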

Why can't mongomirror initiate connection to the destination source? [server selection timeout]

I want to migrate a database from one cluster to another. The source and destination are within the same VPC, but I keep getting an "Unknown, Last error: connection() error occured during connection handshake" error.
mongomirror --host "Replicaset-Name/host-0.domain.com:27017,host-1.domain.com:27017,host-2.domain.com:27017" \
--username "<username>" \
--password "<password>" \
--authenticationDatabase "admin" \
--destination "Replicaset-Name/mongodb-0.mongodb-svc.mongodb.svc.cluster.local:27017,mongodb-1.mongodb-svc.mongodb.svc.cluster.local:27017,mongodb-2.mongodb-svc.svc.cluster.local:27017" \
--destinationUsername "<username>" \
--destinationPassword "<password>"
Output:
mongomirror version: 0.12.5
git version: 6e5a5489944845758420e8762dd5e5a89d2e8654
Go version: go1.16.9
os: linux
arch: amd64
compiler: gc
2022-08-16T16:53:29.353+0000 Source isMaster output: {IsMaster:true MinWireVersion:0 MaxWireVersion:7 Hosts:[host-0.domain.com:27017 host-1.domain.com:27017 host-2.domain.com:27017}
2022-08-16T16:53:29.353+0000 Source buildInfo output: {Version:4.0.12 VersionArray:[4 0 12 0] GitVersion:5776e3cbf9e7afe86e6b29e22520ffb6766e95d4 OpenSSLVersion: SysInfo: Bits:64 Debug:false MaxObjectSize:16777216}
2022-08-16T16:55:29.356+0000 Error initializing mongomirror: could not initialize destination connection: could not connect to server: server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: mongo-1.mongo-svc.mongodb.svc.cluster.local:27017, Type: Unknown, Last error: connection() error occured during connection handshake: EOF }, { Addr: mongo-0.mongo-svc.mongodb.svc.cluster.local:27017, Type: Unknown, Last error: connection() error occured during connection handshake: EOF }, { Addr: mongo-2.mongo-svc.mongodb.svc.cluster.local:27017, Type: Unknown, Last error: connection() error occured during connection handshake: EOF }, ] }
I've tested the connection with mongosh. It has no issues while establishing the connection.
mongosh --host mongodb-1.mongodb-svc.mongodb.svc.cluster.local:27017 --username <username>
So what might I have missed here?
Thanks!

ERROR configuring MongoDB using Ansible (MongoNetworkError: connect ECONNREFUSED)

I'm trying to configure a MongoDB replica set using Ansible. I succeeded in installing MongoDB on the primary server and created the replica-set configuration file, but when I launch the playbook I get an error of this type: MongoNetworkError: connect ECONNREFUSED 3.142.150.62:28041
Does anyone have an idea how to solve this?
Attached are the playbook and the error from the Jenkins console.
Playbook:
---
- name: Play1
  hosts: hhe
  #connection: local
  become: true
  #remote_user: ec2-user
  #remote_user: root
  tasks:
    - name: Install gnupg
      package:
        name: gnupg
        state: present
    - name: Import the public key used by the package management system
      shell: wget -qO - https://www.mongodb.org/static/pgp/server-5.0.asc | sudo apt-key add -
    - name: Create a list file for MongoDB
      shell: echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/5.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-5.0.list
    - name: Reload local package database
      command: sudo apt-get update
    - name: Installation of mongodb-org
      package:
        name: mongodb-org
        state: present
        update_cache: yes
    - name: Start mongodb
      service:
        name: mongod
        state: started
        enabled: yes

- name: Play2
  hosts: hhe
  become: true
  tasks:
    - name: create directories on all the EC2 instances
      shell: mkdir -p replicaset/member

- name: Play3
  hosts: secondary1
  become: true
  tasks:
    - name: Start mongoDB with the following command on secondary1
      shell: nohup mongod --port 28042 --bind_ip localhost,ec2-18-191-39-71.us-east-2.compute.amazonaws.com --replSet replica_demo --dbpath replicaset/member &

- name: Play4
  hosts: secondary2
  become: true
  tasks:
    - name: Start mongoDB with the following command on secondary2
      shell: nohup mongod --port 28043 --bind_ip localhost,ec2-18-221-31-81.us-east-2.compute.amazonaws.com --replSet replica_demo --dbpath replicaset/member &

- name: Play5
  hosts: arbiter
  become: true
  tasks:
    - name: Start mongoDB with the following command on arbiter
      shell: nohup mongod --port 27018 --bind_ip localhost,ec2-13-58-35-255.us-east-2.compute.amazonaws.com --replSet replica_demo --dbpath replicaset/member &

- name: Play6
  hosts: primary
  become: true
  tasks:
    - name: Start mongoDB with the following command on primary
      shell: nohup mongod --port 28041 --bind_ip localhost,ec2-3-142-150-62.us-east-2.compute.amazonaws.com --replSet replica_demo --dbpath replicaset/member &
    - name: Create replicaset initialize file
      copy:
        dest: /tmp/replicaset_conf.js
        mode: "u=rw,g=r,o=rwx"
        content: |
          var cfg =
          {
            "_id" : "replica_demo",
            "version" : 1,
            "members" : [
              {
                "_id" : 0,
                "host" : "3.142.150.62:28041"
              },
              {
                "_id" : 1,
                "host" : "18.191.39.71:28042"
              },
              {
                "_id" : 2,
                "host" : "18.221.31.81:28043"
              }
            ]
          }
          rs.initiate(cfg)
    - name: Pause for a while
      pause: seconds=20
    - name: Initialize the replicaset
      shell: mongo /tmp/replicaset_conf.js
The error on the Jenkins console:
PLAY [Play6] *******************************************************************
TASK [Gathering Facts] *********************************************************
ok: [primary]
TASK [Start mongoDB with the following command on primary] *********************
changed: [primary]
TASK [Create replicaset initialize file] ***************************************
ok: [primary]
TASK [Pause for a while] *******************************************************
Pausing for 20 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [primary]
TASK [Initialize the replicaset] ***********************************************
fatal: [primary]: FAILED! => {"changed": true, "cmd": "/usr/bin/mongo 3.142.150.62:28041 /tmp/replicaset_conf.js", "delta": "0:00:00.146406", "end": "2022-08-11 09:46:07.195269", "msg": "non-zero return code", "rc": 1, "start": "2022-08-11 09:46:07.048863", "stderr": "", "stderr_lines": [], "stdout": "MongoDB shell version v5.0.10\nconnecting to: mongodb://3.142.150.62:28041/test?compressors=disabled&gssapiServiceName=mongodb\nError: couldn't connect to server 3.142.150.62:28041, connection attempt failed: SocketException: Error connecting to 3.142.150.62:28041 :: caused by :: Connection refused :\nconnect#src/mongo/shell/mongo.js:372:17\n#(connect):2:6\nexception: connect failed\nexiting with code 1", "stdout_lines": ["MongoDB shell version v5.0.10", "connecting to: mongodb://3.142.150.62:28041/test?compressors=disabled&gssapiServiceName=mongodb", "Error: couldn't connect to server 3.142.150.62:28041, connection attempt failed: SocketException: Error connecting to 3.142.150.62:28041 :: caused by :: Connection refused :", "connect#src/mongo/shell/mongo.js:372:17", "#(connect):2:6", "exception: connect failed", "exiting with code 1"]}
You already start the service with
  service:
    name: mongod
    state: started
    enabled: yes
so shell: nohup mongod ... & is pointless. You cannot start the mongod service multiple times unless you use a different port and dbPath. You should prefer to start mongod as a service, i.e. systemctl start mongod or similar, instead of nohup mongod ... &. I prefer to use the configuration file (typically /etc/mongod.conf) rather than command line options.
The plain mongo command uses the default port 27017, i.e. it does not connect to the MongoDB instances you started in the task above.
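Putting those two points together, the primary's tasks might look something like this sketch, assuming port 28041, bind_ip and replSet are moved into /etc/mongod.conf:

- name: Start mongodb as a service (instead of nohup mongod ... &)
  service:
    name: mongod
    state: started
    enabled: yes

- name: Initialize the replicaset on the port mongod actually listens on
  shell: mongo --port 28041 /tmp/replicaset_conf.js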
You should also wait until the replica set is initiated. You can do it like this:
content: |
  var cfg =
  {
    "_id" : "replica_demo",
    "version" : 1,
    "members" : [
      {
        "_id" : 0,
        "host" : "3.142.150.62:28041"
      },
      {
        "_id" : 1,
        "host" : "18.191.39.71:28042"
      },
      {
        "_id" : 2,
        "host" : "18.221.31.81:28043"
      }
    ]
  }
  rs.initiate(cfg)
  while (! db.hello().isWritablePrimary ) { sleep(1000) }
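One caveat, based on my understanding of shell versions: db.hello() only exists as a helper in newer mongo shells; on older shells an equivalent wait would be:

while (! db.isMaster().ismaster ) { sleep(1000) }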
You configured an ARBITER. However, an arbiter node is useful only with an even number of Replica Set members. With 3 members it does not make much sense. Anyway, you don't add the arbiter to your Replica Set, so what is the reason to define it?
Just a note: you don't have to create a temp file; you can execute the script directly, e.g. similar to this:
shell:
  cmd: mongo --eval '{{ script }}'
  executable: /bin/bash
vars:
  script: |
    var cfg =
    {
      "_id" : "replica_demo",
      ...
    }
    rs.initiate(cfg)
    while (! db.hello().isWritablePrimary ) { sleep(1000) }
    print(rs.status().ok)
register: ret
failed_when: ret.stdout_lines | last != "1"
Be aware of correct quoting.
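For example, since the JavaScript itself uses double quotes, keeping the --eval argument in single quotes avoids escaping. An inline variant (a sketch reusing the replica set data from the question, trimmed to one member for brevity):

- name: Initialise replica set inline
  shell: >
    mongo --port 28041 --eval
    'rs.initiate({_id: "replica_demo", members: [{_id: 0, host: "3.142.150.62:28041"}]})'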

How to remove an offline host node from OpenStack with kolla-ansible

I have an offline host node which includes compute, control and storage roles. The host was shut down by accident and can't be brought back online. All services for that node are down and marked enabled, but I can't set them to disabled.
So I can't remove the host by:
kolla-ansible -i multinode stop --yes-i-really-really-mean-it --limit node-17
I get this error:
TASK [Gather facts] ********************************************************************************************************************************************************************************************************************
fatal: [node-17]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host node-17 port 22: Connection timed out", "unreachable": true}
PLAY RECAP *****************************************************************************************************************************************************************************************************************************
node-17 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
How can I remove that offline host node? Thanks.
PS: Why do I want to remove that offline host?
node-14 (online): management node where kolla-ansible is installed; also compute, control and storage node
node-15 (online): compute, control and storage node
node-17 (offline): compute, control and storage node
osc99 (being added): compute, control and storage node
Because when I deploy a new host (osc99) with the following command (the multinode file has the node-17 line commented out):
kolla-ansible -i multinode deploy --limit osc99
kolla-ansible reports this error:
TASK [keystone : include_tasks] ********************************************************************************************************************************************************************************************************
included: .../share/kolla-ansible/ansible/roles/keystone/tasks/init_fernet.yml for osc99
TASK [keystone : Waiting for Keystone SSH port to be UP] *******************************************************************************************************************************************************************************
ok: [osc99]
TASK [keystone : Initialise fernet key authentication] *********************************************************************************************************************************************************************************
ok: [osc99 -> node-14]
TASK [keystone : Run key distribution] *************************************************************************************************************************************************************************************************
fatal: [osc99 -> node-14]: FAILED! => {"changed": true, "cmd": ["docker", "exec", "-t", "keystone_fernet", "/usr/bin/fernet-push.sh"], "delta": "0:00:04.006900", "end": "2021-07-12 10:14:05.217609", "msg": "non-zero return code", "rc": 255, "start": "2021-07-12 10:14:01.210709", "stderr": "", "stderr_lines": [], "stdout": "Warning: Permanently added '[node.15]:8023' (ECDSA) to the list of known hosts.\r\r\nssh: connect to host node.17 port 8023: No route to host\r\r\nrsync: connection unexpectedly closed (0 bytes received so far) [sender]\r\nrsync error: unexplained error (code 255) at io.c(235) [sender=3.1.2]", "stdout_lines": ["Warning: Permanently added '[node.15]:8023' (ECDSA) to the list of known hosts.", "", "ssh: connect to host node.17 port 8023: No route to host", "", "rsync: connection unexpectedly closed (0 bytes received so far) [sender]", "rsync error: unexplained error (code 255) at io.c(235) [sender=3.1.2]"]}
NO MORE HOSTS LEFT *********************************************************************************************************************************************************************************************************************
PLAY RECAP *****************************************************************************************************************************************************************************************************************************
osc99 : ok=120 changed=55 unreachable=0 failed=1 skipped=31 rescued=0 ignored=1
How could I fix this error? This is the key point in whether or not I can remove the offline host.
Maybe I could fix it by changing the init_fernet.yml file:
node-14:~$ cat .../share/kolla-ansible/ansible/roles/keystone/tasks/init_fernet.yml
---
- name: Waiting for Keystone SSH port to be UP
  wait_for:
    host: "{{ api_interface_address }}"
    port: "{{ keystone_ssh_port }}"
    connect_timeout: 1
  register: check_keystone_ssh_port
  until: check_keystone_ssh_port is success
  retries: 10
  delay: 5

- name: Initialise fernet key authentication
  become: true
  command: "docker exec -t keystone_fernet kolla_keystone_bootstrap {{ keystone_username }} {{ keystone_groupname }}"
  register: fernet_create
  changed_when: fernet_create.stdout.find('localhost | SUCCESS => ') != -1 and (fernet_create.stdout.split('localhost | SUCCESS => ')[1]|from_json).changed
  until: fernet_create.stdout.split()[2] == 'SUCCESS' or fernet_create.stdout.find('Key repository is already initialized') != -1
  retries: 10
  delay: 5
  run_once: True
  delegate_to: "{{ groups['keystone'][0] }}"

- name: Run key distribution
  become: true
  command: docker exec -t keystone_fernet /usr/bin/fernet-push.sh
  run_once: True
  delegate_to: "{{ groups['keystone'][0] }}"
by changing the delegate_to: "{{ groups['keystone'][0] }}" line? But I can't work out how to implement that.

Having issues with rs.add() in ansible playbook for mongo

I am using the tasks below in my playbook to initialize the cluster and add the secondaries to the primary:
- name: Initialize replica set
  run_once: true
  delegate_to: host1
  shell: >
    mongo --eval 'printjson(rs.initiate())'

- name: Format secondaries
  run_once: true
  local_action:
    module: debug
    msg: '"{{ item }}:27017"'
  with_items: ['host2', 'host3']
  register: secondaries

- name: Add secondaries
  run_once: true
  delegate_to: host1
  shell: >
    /usr/bin/mongo --eval 'printjson(rs.add({{ item.msg }}))'
  with_items: secondaries.results
I am getting the error below:
TASK [mongodb-setup : Add secondaries] *******************************
fatal: [host1]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'ansible.utils.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'msg'\n\nThe error appears to have been in '/var/lib/awx/projects/_dev/roles/mongodb-setup/tasks/users.yml': line 15, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Add secondaries\n ^ here\n"}
Thanks for the response; I have amended my code as below:
- name: Add secondaries
  run_once: true
  delegate_to: host-1
  shell: >
    /usr/bin/mongo --eval 'printjson(rs.add({{ item }}:27017))'
  with_items:
    - host2
    - host3
but I am getting the error below:
failed: [host-2 -> host-1] (item=host-2) => {"changed": true, "cmd": "/usr/bin/mongo --eval 'printjson(rs.add(host-2:27017))'", "delta": "0:00:00.173077", "end": "2019-08-06 13:29:09.422560", "item": "host-2", "msg": "non-zero return code", "rc": 252, "start": "2019-08-06 13:29:09.249483", "stderr": "", "stderr_lines": [], "stdout": "MongoDB shell version: 3.2.22\nconnecting to: test\n2019-08-06T13:29:09.419-0500 E QUERY [thread1] SyntaxError: missing ) after argument list #(shell eval):1:37", "stdout_lines": ["MongoDB shell version: 3.2.22", "connecting to: test", "2019-08-06T13:29:09.419-0500 E QUERY [thread1] SyntaxError: missing ) after argument list #(shell eval):1:37"]}
Your issue is not with rs.add() but with the data you loop over. In your last task, your item list is a single string.
# Wrong #
with_items: secondaries.results
You want to pass an actual list from your previously registered result:
with_items: "{{ secondaries.results }}"
That being said, registering the result of a debug task is rather odd. You should use set_fact to register what you need in a var, or better, directly loop over your list of hosts in your task. It also looks like the rs.add function is expecting a string, so you should quote the argument in your eval. Something like:
- name: Add secondaries
  shell: >
    /usr/bin/mongo --eval 'printjson(rs.add("{{ item }}:27017"))'
  with_items:
    - host2
    - host3
And the way you use delegation seems rather strange to me in this context but it's hard to give any valid clues without a complete playbook example of what you are trying to do (that you might give in a new question if necessary).
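As an aside, the set_fact alternative mentioned above could look something like this sketch (host names taken from the question):

- name: Build the list of secondaries
  run_once: true
  set_fact:
    # append ":27017" to each host name to form the member addresses
    secondaries: "{{ ['host2', 'host3'] | map('regex_replace', '$', ':27017') | list }}"

- name: Add secondaries
  run_once: true
  delegate_to: host1
  shell: >
    /usr/bin/mongo --eval 'printjson(rs.add("{{ item }}"))'
  with_items: "{{ secondaries }}"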