Service unavailable error while using MongoDB, ElasticSearch and transporter - mongodb

I am trying to use the transporter tool to create a pipeline that syncs a MongoDB database to Elasticsearch. I am using a Linux virtual machine (Ubuntu) for this.
I have created a MongoDB database my_application with a users collection containing the following data:
db.users.find().pretty();
{
"_id" : ObjectId("6008153cf979ac0f18681765"),
"firstName" : "Sammy",
"lastName" : "Shark"
}
{
"_id" : ObjectId("60081544f979ac0f18681766"),
"firstName" : "Gilly",
"lastName" : "Glowfish"
}
I configured Elasticsearch and the transporter pipeline, then exported MongoDB_URI and Elastic_URI.
I then ran transporter with my pipeline.js and got the following output:
INFO[0005] metrics source records: 2 path=source ts=1611154492641006368
INFO[0005] metrics source/sink records: 2 path="source/sink" ts=1611154492641013556
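For reference, a transporter pipeline of this kind typically looks roughly like the following (a sketch based on transporter's documented mongodb and elasticsearch adaptors; the URI variables and the catch-all namespace filter are assumptions, not my exact file):
// pipeline.js (sketch)
var source = mongodb({
  "uri": "${MONGODB_URI}"          // e.g. mongodb://localhost/my_application
})
var sink = elasticsearch({
  "uri": "${ELASTICSEARCH_URI}"    // e.g. http://localhost:9200/my_application
})
t.Source("source", source, "/.*/").Save("sink", sink, "/.*/")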
When I then try to query Elasticsearch, I get this error:
curl $ELASTICSEARCH_URI/_search?pretty=true
{
"error" : {
"root_cause" : [
{
"type" : "cluster_block_exception",
"reason" : "blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];"
}
],
"type" : "cluster_block_exception",
"reason" : "blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];"
},
"status" : 503
}
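From what I understand, this 503 with "state not recovered / initialized" means the cluster has not finished forming (no master has been elected yet), so every request is blocked. Cluster health and the node list should confirm that, using the same URI variable as above:
curl $ELASTICSEARCH_URI/_cluster/health?pretty
curl $ELASTICSEARCH_URI/_cat/nodes?v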
Here is my elasticsearch.yml:
# Use a descriptive name for the node:
node.name: node-1
path.data: /var/lib/elasticsearch
# Path to log files:
path.logs: /var/log/elasticsearch
# Set the bind address to a specific IP (IPv4 or IPv6):
network.host: 0.0.0.0
# Set a custom port for HTTP:
http.port: 9200
# Bootstrap the cluster using an initial set of master-eligible nodes:
cluster.initial_master_nodes: ["node-1", "node-2"]
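(Side note: cluster.initial_master_nodes lists a node-2 that does not appear anywhere else in this setup; if only node-1 is actually running, that alone can keep the cluster from ever electing a master, which would match the _na_ cluster_uuid below. For a genuinely single-node setup, which is an assumption about the intended topology, a minimal sketch of the relevant settings would be:)
# Single-node cluster: no master election to bootstrap
node.name: node-1
discovery.type: single-node
# cluster.initial_master_nodes should then be removed entirely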
Here is my Elasticsearch node info:
{
"name" : "node-1",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "_na_",
"version" : {
"number" : "7.7.1",
"build_flavor" : "default",
"build_type" : "deb",
"build_hash" : "ad56dce891c901a492bb1ee393f12dfff473a423",
"build_date" : "2020-05-28T16:30:01.040088Z",
"build_snapshot" : false,
"lucene_version" : "8.5.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
I have tried deleting indices and restarting the server, but the error persists. I would like to know how to solve this. I am using Elasticsearch 7.10.

Related

Mongos instance can't communicate with the database

So I have a sharded cluster with 2 config servers, 2 shards each with 2 replicas, and 2 mongos instances, everything running on different VMs.
However, after configuring all of it, I finally tried to interact with the (still empty) database by running a simple show dbs from the mongos instance, but after about a minute it threw the following error:
uncaught exception: Error: listDatabases failed:{
"ok" : 0,
"errmsg" : "Could not find host matching read preference { mode: \"primary\" } for set rep",
"code" : 133,
"codeName" : "FailedToSatisfyReadPreference",
"operationTime" : Timestamp(1648722327, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1648722327, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
Everything seems to be configured correctly, and when I run sh.status() from the mongos instance it identifies the shards and replica sets:
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("62421dd6b5f9640f309faca0")
}
shards:
{ "_id" : "rep", "host" : "rep/192.168.86.136:26000,192.168.86.141:26001", "state" : 1 }
{ "_id" : "repb", "host" : "repb/192.168.86.142:26002,192.168.86.143:26003", "state" : 1 }
active mongoses:
"4.4.8" : 2
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 5
Last reported error: Empty host component parsing HostAndPort from ""
Time of Reported error: Thu Mar 31 2022 11:06:39 GMT+0100 (WEST)
Migration Results for the last 24 hours:
No recent migrations
databases:
{ "_id" : "config", "primary" : "config", "partitioned" : true }
config.system.sessions
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
rep 919
repb 105
too many chunks to print, use verbose if you want to force print
{ "_id" : "testdb", "primary" : "rep", "partitioned" : false, "version" : { "uuid" : UUID("2e584dcd-25ea-4ba4-805c-b40928e26511"), "lastMod" : 1 } }
Maybe a firewall issue.
Every node in your cluster must be able to reach every other node on the corresponding port. See
Simple HTTP/TCP health check for MongoDB
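For a quick manual check from the mongos host, plain TCP probes against the members listed in sh.status() will verify reachability (the IPs and ports below are taken from the output above; the config servers and the other mongos would need the same check on their own ports):
nc -zv 192.168.86.136 26000
nc -zv 192.168.86.141 26001
nc -zv 192.168.86.142 26002
nc -zv 192.168.86.143 26003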
Try this script to check each member of each replica set:
const MONGO_PASSWORD = '*******'
const AUTH_SOURCE = 'admin'
// current user (assumes you are authenticated on the mongos)
const user = db.runCommand({ connectionStatus: 1 }).authInfo.authenticatedUsers.shift().user;
// map of replica set name -> "rsName/host1:port,host2:port"
const map = db.adminCommand("getShardMap").map;
for (let rs of Object.keys(map)) {
    let uri = map[rs].split("/");
    let connectionString = `mongodb://${user}:${MONGO_PASSWORD}@${uri[1]}/admin?replicaSet=${uri[0]}&authSource=${AUTH_SOURCE}`;
    let replicaSet = Mongo(connectionString).getDB("admin");
    for (let member of replicaSet.adminCommand({ replSetGetStatus: 1 }).members) {
        // only consider hosts advertised by the replica set (skips arbiters and hidden members)
        if (!replicaSet.hello().hosts.includes(member.name)) continue;
        printjsononeline({ replicaSet: rs, host: member.name, stateStr: member.stateStr, health: member.health });
        if (member.health != 1 || !["PRIMARY", "SECONDARY"].includes(member.stateStr))
            print(`Member state of ${member.name} is '${member.stateStr}'`);
    }
}
It turns out I had configured the replica set incorrectly, so all I had to do was recreate the volumes of all the VMs and configure everything again from scratch. Now it works as it should.

Find out user who created database/collection in MongoDB

I have many applications on my server that use MongoDB. I want to find out the username that was used to create a specific db/collection.
I can see that some application is malfunctioning and keeps creating databases dynamically, and I want to identify that application through the user it connects with.
What I have done so far: I found the connection information in the MongoDB logs by grepping for that database, and then ran this query:
db.currentOp(true).inprog.forEach(function(o) { if (o.connectionId == 502925) printjson(o); });
And this is the result I am getting:
{
"host" : "abcd-server:27017",
"desc" : "conn502925",
"connectionId" : 502925,
"client" : "127.0.0.1:39266",
"clientMetadata" : {
"driver" : {
"name" : "mongo-java-driver",
"version" : "3.6.4"
},
"os" : {
"type" : "Linux",
"name" : "Linux",
"architecture" : "amd64",
"version" : "3.10.0-862.14.4.el7.x86_64"
},
"platform" : "Java/AdoptOpenJDK/1.8.0_212-b03"
},
"active" : false,
"currentOpTime" : "2019-07-02T07:31:39.518-0500"
}
Please let me know if there is any way to find out the user.
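Since the connection id (conn502925) is already known, one option worth trying is to grep the mongod log for the authentication event on that same connection, assuming access control is enabled and the default log location (both are assumptions here):
# log path is an assumption; check systemLog.path in mongod.conf
grep "conn502925" /var/log/mongodb/mongod.log | grep -i "authenticated"
If authorization is not enabled on the server, connections never authenticate, so there is no username to recover this way.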

Error when trying to connect to Replica Set in Mongo

When I try to connect to my mongo replica set in AWS, I get this error:
slavenode:27017: [Errno -2] Name or service not
known,ip-XXX-XX-XX-XX:27017: [Errno -2] Name or service not known
(where XXX-XX.. corresponds to my actual ip address)
The code to connect is shown below:
from pymongo import MongoClient

client = MongoClient("mongodb://Master-PublicIP:27017,Slave-PublicIP:27017/myFirstDB?replicaSet=rs0")
db = client.myFirstDB
try:
    db.command("serverStatus")
except Exception as e:
    print(e)
else:
    print("You are connected!")
client.close()
(where Master-PublicIP and Slave-PublicIP are the actual IPv4 public IPs from the AWS console)
I already have a replica set, and its configuration is:
rs0:PRIMARY> rs.conf()
{
"_id" : "rs0",
"version" : 2,
"members" : [
{
"_id" : 0,
"host" : "ip-XXX-XX-XX-XXX:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : 0,
"votes" : 1
},
{
"_id" : 1,
"host" : "SlaveNode:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : 0,
"votes" : 1
}
],
"settings" : {
"chainingAllowed" : true,
"heartbeatTimeoutSecs" : 10,
"getLastErrorModes" : {
},
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
}
}
}
I have created /data/db on the PRIMARY and /data/db1 on the SECONDARY, and set the proper permissions with sudo chmod -R 755 /data/db.
My MongoDB version is 3.0.15. Does anyone know what is going wrong?
Thanks in advance.
Have you tried removing the myFirstDB from within the MongoClient()?
MongoClient("mongodb://Master-PublicIP:27017,Slave-PublicIP:27017/?replicaSet=rs0")
Because your next line then specifies which db you want to use
db = client.myFirstDB
Or I think you can specify the db by putting a dot after the closing parenthesis on the MongoClient():
MongoClient("mongodb://Master-PublicIP:27017,Slave-PublicIP:27017/?replicaSet=rs0").myFirstDB
I managed to solve the problem. As @N3i1 suggested in the comments, I used the Public DNS (IPv4) names. There was an issue with the hosts I had declared in /etc/hosts.
In that file I had mapped the IPs of the master/slaves to some names. For some reason this didn't work, so I deleted those entries and then reconfigured the replica set.
In PRIMARY in mongo shell I did:
cfg = {"_id" : "rs0", "members" : [{"_id" : 0,"host" : "Public DNS (IPv4):27017"},{"_id" : 1,"host" : "Public DNS (IPv4):27017"}]}
rs.reconfig(cfg,{force: true});
Then I connected to the replica set from Python with:
MongoClient("mongodb://Public DNS (IPv4):27017,Public DNS (IPv4):27017/?replicaSet=rs0")
Of course, replace the Public DNS (IPv4) addresses with yours.

exception: hosts cannot switch between localhost and hostname

I created a replica set.
I added localhost to the set in the beginning, but when I try to edit the member to use the actual hostname, I get the error "exception: hosts cannot switch between localhost and hostname".
I need to get rid of localhost:27017 because otherwise it doesn't let me add any other member by hostname (i.e. a non-localhost address).
my-rs0:PRIMARY> cfg=rs.conf();
{
"_id" : "my-rs0",
"version" : 1,
"members" : [
{
"_id" : 0,
"host" : "localhost:27017"
}
]
}
my-rs0:PRIMARY> cfg.members[0].host="my-server04:27017"
my-rs0:PRIMARY> cfg
{
"_id" : "my-rs0",
"version" : 1,
"members" : [
{
"_id" : 0,
"host" : "my-server04:27017"
}
]
}
Using rs.reconfig(cfg):
my-rs0:PRIMARY> rs.reconfig(cfg);
{
"errmsg" : "exception: hosts cannot switch between localhost and hostname",
"code" : 13645,
"ok" : 0
}
No luck with rs.add("my-server04:27017") or rs.remove("localhost:27017") either.
my-rs0:PRIMARY> rs.add("my-server04:27017");
{
"errmsg" : "exception: can't use localhost in repl set member names except when using it for all members",
"code" : 13393,
"ok" : 0
}
I have tried all the reconfiguration methods mentioned here: Replica Set Reconfig steps.
But none of them fix the issue. I have already spent hours on this and am really frustrated.
I had the same problem and fixed it without dropping any databases. I just edited the host field of the member in the local.system.replset collection to match the local IP and then restarted mongod. Everything worked perfectly.
It looks like you'll need to scrap your replica set and start over.
I believe that when you initiated your replica set, you explicitly passed it a config document that references your MongoDB instance using localhost.
As I was investigating this, I brought up a replica set. When I initiated it using rs.initiate() (without passing a config document), it used the hostname by default:
rs.initiate()
rs.conf()
{
"_id" : "stack1",
"version" : 1,
"members" : [
{
"_id" : 0,
"host" : "MY-HOSTNAME:28001"
}
]
}
This post describes the need to completely clear out your database files to create a fresh replica set.
Once I did this, I initiated a new replica set by passing a configuration document:
cfg = {
"_id" : "stack1",
"version" : 1,
"members" : [
{
"_id" : 0,
"host" : "localhost:28001"
}
]
}
rs.initiate(cfg)
rs.conf()
{
"_id" : "stack1",
"version" : 1,
"members" : [
{
"_id" : 0,
"host" : "localhost:28001"
}
]
}
Long story short, you'll need to delete all of the files in your --dbpath directory and re-create the replica set, without explicitly specifying "localhost" as your hostname.
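After clearing the --dbpath directory, re-initiating with the real hostname from the start keeps the config consistent. A minimal sketch using the names from the question (the replica set name and host are the asker's; everything else is boilerplate):
rs.initiate({
  _id: "my-rs0",
  members: [ { _id: 0, host: "my-server04:27017" } ]
})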
I did it according to the docs:
Restarted mongod on another port (e.g. 37017) to prevent user connections to it.
Then started a shell on it:
$ mongo --port 37017
Then updated the configuration:
use local
cfg = db.system.replset.findOne( { "_id": "my-rs0" } )
cfg.members[0].host = "my-server04:27017"
db.system.replset.update( { "_id": "my-rs0" } , cfg )
Then restarted MongoDB on the original port.

mongodb replicaset host name change error

I have a MongoDB replica set on Ubuntu. In the replica set, the hosts are defined as localhost, as you can see:
{
"_id" : "myrep",
"version" : 4,
"members" : [
{
"_id" : 0,
"host" : "localhost:27017"
},
{
"_id" : 2,
"host" : "localhost:27018"
},
{
"_id" : 1,
"host" : "localhost:27019",
"priority" : 0
}
]
}
I want to change the host addresses to the real IP of the server, but when I run rs.reconfig, I get this error:
{
"assertion" : "hosts cannot switch between localhost and hostname",
"assertionCode" : 13645,
"errmsg" : "db assertion failure",
"ok" : 0
}
How can I solve it?
Thank you.
There is a cleaner way to do this:
use local
cfg = db.system.replset.findOne({_id:"replicaSetName"})
cfg.members[0].host="newHost:27017"
db.system.replset.update({_id:"replicaSetName"},cfg)
Then restart mongod.
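On Ubuntu that typically means restarting the service (assuming mongod was installed as a systemd service; adjust the unit name if yours differs):
sudo systemctl restart mongod
# or, on older init-based systems:
sudo service mongod restart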
The only way I found to change the host names was to recreate the replica set. To do it properly, the db directories need to be cleaned out; then starting all the servers in replication mode again and creating a new replica set with the new host names fixed it.