Distributed Drill Won't Start: "Drillbit is disallowed to bind to loopback address in distributed mode."

I have three CentOS 8 VirtualBox VMs with networking enabled, each with 16 GB RAM allocated. Each has /etc/hosts configured as follows:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
I am running a ZooKeeper quorum across the VMs with the following config:
#/home/strickolas/Downloads/project/zookeeper/conf/zoo.cfg
initLimit=10
syncLimit=5
clientPort=2181
tickTime=2000
dataDir=/home/strickolas/Downloads/project/zookeeper/data
reconfigEnabled=true
standaloneEnabled=false
server.1=192.168.0.4:2888:3888
server.2=192.168.0.5:2888:3888
server.3=192.168.0.6:2888:3888
I killed the firewall using sudo systemctl stop firewalld; it was causing me problems with running ZK. I will eventually need to configure the firewall properly, but for now that's a future-me problem.
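(For reference, the eventual firewalld fix should just be opening the ZooKeeper ports used in zoo.cfg below; a sketch with standard firewall-cmd options, assuming the default zone:)
sudo firewall-cmd --permanent --add-port=2181/tcp   # client port
sudo firewall-cmd --permanent --add-port=2888/tcp   # quorum peer port
sudo firewall-cmd --permanent --add-port=3888/tcp   # leader-election port
sudo firewall-cmd --reload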
Doing a ./bin/zkServer.sh status says that ZK is running as it should be:
ZooKeeper JMX enabled by default
Using config /home/strickolas/Downloads/project/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader # or follower depending on which node you ask.
Now for my Drill setup:
#/home/strickolas/Downloads/project/drill/conf/drill-override.conf
drill.exec: {
  cluster-id: "drillbits1",
  zk.connect: "192.168.0.4:2181,192.168.0.5:2181,192.168.0.6:2181"
}
Running ./bin/drillbit.sh run results in:
Tue Mar 31 23:59:16 EDT 2020 Starting drillbit on localhost.localdomain
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63366
max locked memory (kbytes, -l) 16384
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 63366
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Exception in thread "main" org.apache.drill.exec.exception.DrillbitStartupException: Failure during initial startup of Drillbit.
        at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:584)
        at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:550)
        at org.apache.drill.exec.server.Drillbit.main(Drillbit.java:546)
Caused by: org.apache.drill.exec.exception.DrillbitStartupException: Drillbit is disallowed to bind to loopback address in distributed mode.
        at org.apache.drill.exec.service.ServiceEngine.start(ServiceEngine.java:97)
        at org.apache.drill.exec.server.Drillbit.run(Drillbit.java:220)
        at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:580)
        ... 2 more
Any idea?

You need to create an entry in /etc/hosts on each of the hosts mapping its IP address to its hostname (e.g. "192.168.0.4 drillbit1" for node 1, "192.168.0.5 drillbit2" for node 2, and "192.168.0.6 drillbit3" for node 3), as sketched below.
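For example, on node 1 (the drillbitN hostnames here are illustrative; use each machine's real hostname as reported by the hostname command):

# appended to /etc/hosts on node 1 (192.168.0.4)
192.168.0.4 drillbit1

With that entry in place, the machine's hostname resolves to its routable address instead of the 127.0.0.1 loopback line, which is exactly what the "disallowed to bind to loopback address" check rejects.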

Related

How to check how many connections a MongoDB server has, what limit is present in the config, and what the default is?

I have MongoDB 4.4.0 and a Symfony project with a queue and a lot of consumers. I am facing a problem: when many consumers execute jobs, saving a lot of products in the DB, the server CPU load increases fast. I want to check how many connections are open at a given moment. Investigating the docs, I found information about the connection limit: it equals 64k, meaning 64,000, correct? But to understand how many connections are open at a given moment, I executed this command in the shell:
db.serverStatus().connections
But I got an empty result: the script executed successfully but printed nothing. Why? And how do I check it? I tested this on the machine where Mongo is present (kernel 4.15.0-117-generic).
On the machine where I ran docker-compose up for the MongoDB images:
$ ulimit -a
-t: cpu time (seconds) unlimited
-f: file size (blocks) unlimited
-d: data seg size (kbytes) unlimited
-s: stack size (kbytes) 8192
-c: core file size (blocks) 0
-m: resident set size (kbytes) unlimited
-u: processes 193076
-n: file descriptors 1024
-l: locked-in-memory size (kbytes) 16384
-v: address space (kbytes) unlimited
-x: file locks unlimited
-i: pending signals 193076
-q: bytes in POSIX msg queues 819200
-e: max nice 0
-r: max rt priority 0
-N 15: unlimited
And inside the MongoDB container itself:
root@mongodb:/# ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 193076
max locked memory (kbytes, -l) 16384
max memory size (kbytes, -m) unlimited
open files (-n) 1048576
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
But the strange case is that even while my consumers were working, the query db.serverStatus().connections returned an empty result.
Actually, I am facing slow speed when my consumers insert many rows into documents. The first thing I did was use transactions, and this helped, but after some time I faced slow speed again. Right now I am thinking about sharding my MongoDB: could that approach resolve the speed problem when many consumers each insert a lot of rows into documents?
UPDATE
I executed this command in the shell directly and got the same empty result:
> db.serverStatus().connections
> db
test
> db
test
> use symfony
switched to db symfony
> db.serverStatus().connections
> db.serverStatus().connections
>
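One way to sanity-check this (standard mongo shell helpers; the connections subdocument fields shown are the documented ones) is to run the command explicitly and confirm it succeeds:
> var s = db.runCommand({ serverStatus: 1 })  // explicit form of db.serverStatus()
> s.ok                                        // 1 means the command itself succeeded
> printjson(s.connections)                    // expected shape: { "current": ..., "available": ..., "totalCreated": ... }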

Mlock error when opening the mongo console: "Failed to mlock: Cannot allocate locked memory"

I have just tried to install MongoDB on a fresh Ubuntu 18 machine.
For this I went through the tutorial from the website.
Everything went fine - including starting the server with 
sudo systemctl start mongod 
and checking that it runs with:
sudo systemctl status mongod
Only I can't seem to start a mongo console. When I type mongo, I get the following error:
2020-07-17T13:26:48.049+0000 F - [main] Failed to mlock: Cannot allocate locked memory. For more details see: https://dochub.mongodb.org/core/cannot-allocate-locked-memory: Operation not permitted
2020-07-17T13:26:48.049+0000 F - [main] Fatal Assertion 28832 at src/mongo/base/secure_allocator.cpp 255
2020-07-17T13:26:48.049+0000 F - [main]
***aborting after fassert() failure
I checked the suggested link, but there seems to be no limits problem, as resources are not limited (per a check with ulimit). The machine has 16 GB of RAM. Any idea what the problem/solution might be?
EDIT: the process limits are:
Limit                    Soft Limit    Hard Limit    Units
Max cpu time             unlimited     unlimited     seconds
Max file size            unlimited     unlimited     bytes
Max data size            unlimited     unlimited     bytes
Max stack size           8388608       unlimited     bytes
Max core file size       0             unlimited     bytes
Max resident set         unlimited     unlimited     bytes
Max processes            64000         64000         processes
Max open files           64000         64000         files
Max locked memory        unlimited     unlimited     bytes
Max address space        unlimited     unlimited     bytes
Max file locks           unlimited     unlimited     locks
Max pending signals      62761         62761         signals
Max msgqueue size        819200        819200        bytes
Max nice priority        0             0
Max realtime priority    0             0
Max realtime timeout     unlimited     unlimited     us
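(The table above is the standard /proc/<pid>/limits output; it can be re-checked at any time for a live process, e.g.:)
cat /proc/$$/limits    # $$ = PID of the current shell; substitute the PID of mongo/mongod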
I was getting that exact error, and the linked MongoDB page wasn't helpful for me either. I'm running on FreeBSD and found a useful bit of detail in a bug report for the port. It turns out a system-level resource limit was the underlying problem. On FreeBSD, the key is these two sysctl settings:
sysctl vm.stats.vm.v_wire_count vm.max_wired
v_wire_count should be less than max_wired. Increasing max_wired solved the issue for me.
If you use some sort of virtualization to deploy your machine, you need to make sure that the memlock (mlock()) system calls are allowed. For example, for systemd-nspawn, check this answer: https://stackoverflow.com/a/69286781/16085315
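On a plain systemd host, the same idea can be expressed as a unit drop-in (a sketch, assuming mongod runs under systemd; the file name and value are illustrative):

# /etc/systemd/system/mongod.service.d/memlock.conf
[Service]
LimitMEMLOCK=infinity

followed by sudo systemctl daemon-reload and sudo systemctl restart mongod to apply it.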
I just had this issue on my FreeBSD VM with MongoDB; the following solved it, as mentioned previously:
# sysctl vm.stats.vm.v_wire_count vm.max_wired
vm.stats.vm.v_wire_count: 1072281
vm.max_wired: 411615
# sysctl -w vm.max_wired=1400000
vm.max_wired: 411615 -> 1400000
# service mongod restart
Stopping mongod.
Waiting for PIDS: 36308.
Starting mongod.
To make the value persist long-term, set it in /etc/sysctl.conf:
vm.max_wired=1400000
v_wire_count should be less than max_wired.

MongoDB performing poorly on a machine with a better configuration

We have two Mongo servers, one for testing and one for production; each of them has a collection named images with ~700M documents.
{
  _id
  MovieId
  ...
}
We have indexes on _id and MovieId.
We are running queries of the following format:
db.images.find({MovieId:1234})
QA Config:
256GB of RAM with RAID disk
Prod Config:
700GB of RAM with SSD mirror
mongod configuration (/etc/mongod.conf)
QA:
storage:
  dbPath: "/data/mongodb_data"
  journal:
    enabled: false
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 256
setParameter:
  wiredTigerConcurrentReadTransactions: 256
Prod:
storage:
  dbPath: "/data/mongodb_data"
  directoryPerDB: true
  journal:
    enabled: false
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 600
setParameter:
  wiredTigerConcurrentReadTransactions: 256
With its better configuration, the prod server should perform better than the QA server. Surprisingly, it is running very slowly compared to the QA server.
I checked current ops (using db.currentOp()) on both servers under the same load: many queries on the prod server take 10-20 seconds, but on the QA server no query takes more than 1 second.
The queries are initiated from MapReduce jobs.
I need help in identifying the problem.
[Edit]: Mongo Version 3.0.11
You can debug your Mongo queries in multiple ways.
Start with your index usage, using the command below:
db.images.aggregate( [ { $indexStats: { } } ] )
If this doesn't give you any useful information, then check the execution plan of the slow queries using:
db.setProfilingLevel(2)
db.system.profile.find().pretty()
db.system.profile will give you a complete profile of your queries, as shown below.
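To zero in on just the slow operations, the profile collection can be filtered and sorted like any other collection (a sketch using the standard profiler fields millis and ts; the 1000 ms threshold is arbitrary):

db.system.profile.find({ millis: { $gt: 1000 } }).sort({ ts: -1 }).pretty()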
There was a difference in the open files and max user processes limits between our Staging and Production servers. I checked it using the command ulimit -a.
Staging:
open files (-n) 32768
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 32768
Prod:
open files (-n) 16384
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 16384
After I changed the two settings on prod, it started giving better performance. Thanks @Gaurav for advising me on this.
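For the record, the usual way to make such limits persistent is an entry under /etc/security/limits.d/ (a sketch; the user name and values here are illustrative and should match whatever account runs mongod):

# /etc/security/limits.d/90-mongod.conf
mongod  soft  nofile  32768
mongod  hard  nofile  32768
mongod  soft  nproc   32768
mongod  hard  nproc   32768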

MongoDB Ops Manager simple test install: java.lang.OutOfMemoryError on startup

I just installed a test evaluation of MongoDB Ops Manager and get an error on startup of the Backup HTTP server:
Migrate MMS data
Running migrations...[ OK ]
Start MMS server
Instance 0 starting..........[ OK ]
Start Backup HTTP Server
Instance 0 starting.......[FAILED]
2015-05-07T14:00:32.107+0000 [main] gid ERROR ServerMain:199 - Cannot start bslurp server [FATAL-EXITING] - instance: 0 - msg: unable to create new native thread
java.lang.OutOfMemoryError: unable to create new native thread
I appear to have plenty of memory:
[root@krh60621 ~]# free -m
total used free shared buffers cached
Mem: 15951 4588 11362 0 364 2021
and I upped the max processes to unlimited to see if that would help....
[root@krh60621 ~]# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 127421
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 94000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
[root@krh60621 ~]# ps -eLF | grep -c java
593
[root@krh60621 ~]# ps -eLF | wc -l
1031
Any thoughts???
I encountered a similar issue in our Test Ops Manager deployment when we upgraded to Ops Manager 1.8.0. I ultimately opened up a ticket with MongoDB Support and this was the resolution for our issue:
The Ops Manager components are launched using the default username "mongodb-mms". Please adjust the ulimit settings for this user to match those of the "mongodb" user, currently defined in /etc/security/limits.d/99-mongodb-mms-automation-agent.conf.
You may wish to add a separate file under /etc/security/limits.d/ for the mongodb-mms user.
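Such a file might look like the following (a sketch; the values are illustrative and should mirror those granted to the "mongodb" user):

# /etc/security/limits.d/99-mongodb-mms.conf
mongodb-mms  soft  nproc   64000
mongodb-mms  hard  nproc   64000
mongodb-mms  soft  nofile  64000
mongodb-mms  hard  nofile  64000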
More information can be found here.

How many types of socket limits are there? What are the differences between them?

I would like to know: how many types of socket limits are there?
Is it just SOCK_STREAM and SOCK_DGRAM?
I have tried ulimit -a; I am wondering whether all of these are considered socket limits.
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 11716
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 65535
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 11716
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
I can't speak for CentOS, but in general other platforms do define additional socket types, such as SOCK_RAW, SOCK_RDM, and SOCK_SEQPACKET.
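Note also that none of the ulimit entries above is socket-specific: on Unix-like systems a socket occupies a file descriptor, so it is the open files (-n) value that effectively caps how many sockets a process can hold. For example (standard shell builtins; the value 65535 is illustrative):

ulimit -n          # current open-files limit, which sockets count against
ulimit -n 65535    # raise the soft limit for this shell and its children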