Not Able to Disable Dynamic Reconfig for Zookeeper - apache-zookeeper

I have three ZK nodes, each with Solr running on it.
I keep getting this error: Errors: Your ZK connection string (3 hosts) is different from the dynamic ensemble config (3 hosts). Solr does not currently support dynamic reconfiguration and will only be able to connect to the zk hosts in your connection string.
I have tried many ways to disable it, but without success.
Here's my zoo.cfg:
reconfigEnabled=false
standaloneEnabled=false
server.1=172.32.24.47:2888:3888
server.2=172.32.27.110:2888:3888
server.3=0.0.0.0:2888:3888
4lw.commands.whitelist=mntr,conf,ruok
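To see what the ensemble actually reports as its dynamic config, you can query the conf four-letter-word command (already whitelisted above). A minimal sketch, assuming the default client port 2181:
echo conf | nc 172.32.24.47 2181
On ZooKeeper 3.5+ the output should also include the quorum membership, which you can compare against the ZK connection string Solr is using.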

Related

Expecting replica set member, but found a STANDALONE. Removing x.x.x.x:27017 from client view of cluster

Not able to connect to a standalone mongo node; getting the error below.
ERROR [cluster-ClusterId{value='asdfs', description='null'}-x.x.x.x:27017] org.mongodb.driver.cluster - Expecting replica set member, but found a STANDALONE. Removing x.x.x.x:27017 from client view of cluster.
Is it okay to give multiple IPs in the config file when there is only one mongo node?
Is it okay to give multiple IPs in the config file when there is only one mongo node?
Not for standalone connections, no.
Specify the address of your standalone only.
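For example, a minimal sketch with a recent pymongo (the address is a placeholder): pass exactly one host and tell the driver not to attempt replica set discovery.
from pymongo import MongoClient
# One host only; directConnection=true makes the driver treat the target
# as a standalone instead of trying to discover a replica set topology.
client = MongoClient("mongodb://203.0.113.7:27017/?directConnection=true")
print(client.admin.command("ping"))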

Using MongoClient with multiple mongos to connect to a sharded replica mongo cluster

My sharded replica cluster consists of 10 mongos, 5 config servers, and 10 shards. I use the mongo client to connect to multiple mongos instances.
I have two questions.
The first question: What is the load policy in this situation? Is it round-robin scheduling?
The second one: If one of the mongos instances is down, what move would the mongo client take? Will it still connect to this mongos instance or drop it from the list?
Please help with these. Thanks.
The mongos servers provide a routing service to direct read/write queries to the appropriate shard(s).
You are specifying multiple mongos instances to connect to the MongoDB sharded cluster. An available mongos will be used to connect to the server.
The first question: What is the load policy in this situation? Is it round-robin scheduling?
The client will connect to the server using an available mongos. There is no "load policy" and there is no round-robin scheduling. You use multiple mongos instances for high availability.
See: Number of mongos and Distribution
The second one: If one of the mongos instances is down, what move would the mongo client take? Will it still connect to this mongos instance or drop it from the list?
If a mongos is down, the client will connect to the server using another available mongos from the list (you have more than one mongos to connect with).
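For illustration, a minimal pymongo sketch (hostnames are placeholders): listing several mongos routers in one connection string gives the driver alternatives to fail over to.
from pymongo import MongoClient
# Multiple mongos routers in one connection string: the driver picks an
# available one (within its latency window, not round-robin) and falls
# back to the others if the chosen mongos becomes unreachable.
client = MongoClient("mongodb://mongos1.example.com:27017,mongos2.example.com:27017")
client.mydb.mycoll.insert_one({"status": "routed via an available mongos"})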

MongoDB nodes (AWS EC2 Instances) are still responsive even after network partitioning done using Security Groups

I have created a MongoDB replica set using 5 EC2 instances on AWS. I added the nodes using the rs.add("[IP_Address]") command.
I want to perform a network partition in the replica set. In order to do that, I have specified 2 kinds of security groups. 'SG1' has port 27017 (the MongoDB port) opened. 'SG2' doesn't expose 27017.
I want to isolate 2 nodes from the replica set. When I apply SG2 on these 2 nodes (EC2 instances), ideally they should stop receiving reads and writes from the primary, as I am blocking port 27017 using security group SG2. But in my case, they are still writable. Data written on the primary is still reflected on the partitioned nodes. Can someone help? TYA.
Most firewalls, including AWS Security Groups, block incoming connections at the time the connection is opened. Changing settings affects all new connections, but existing open connections are not re-evaluated when the new rules are applied.
MongoDB maintains long-lived connections between hosts, so traffic between them would only be blocked once those existing connections drop and have to be re-opened.
On Linux you can restart networking, which will reset the connections. You can do this after applying the new rules by running:
/etc/init.d/networking stop && /etc/init.d/networking start
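On newer systemd-based hosts that no longer ship that init script, an equivalent (an assumption; the unit name varies by distribution, this is the Debian/Ubuntu ifupdown one) would be:
sudo systemctl restart networking
Restarting the mongod process on the partitioned nodes would likewise drop their established connections.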

mongodb client libraries fail to connect to replica set

Using recent client libraries (pymongo 3.4, mongodb (nodejs) 2.2.27), I am having trouble connecting to my mongodb servers with replication.
The replicaset configuration contains either the internal ips of the servers or the hostnames. I'm getting the following error:
pymongo.errors.ServerSelectionTimeoutError: mongodbdriver20151129-arbiter-1:27017: [Errno 8] nodename nor servname provided, or not known,mongodbdriver20151129-instance-1:27017: [Errno 8] nodename nor servname provided, or not known,mongodbdriver20151129-instance-2:27017: [Errno 8] nodename nor servname provided, or not known
or
pymongo.errors.ServerSelectionTimeoutError: 10.0.0.5:27017: timed out,10.0.0.6:27017: timed out,10.0.0.4:27017: timed out
I am currently working around it by changing the replicaset config to contain the external ips for the servers but I guess that would slow down the inter-server communication. How can I connect to my servers from an external location with the original rsconf?
[update] Note: I am trying to connect to the external IP of the servers, and this worked fine when using pymongo 2.8 or mongodb (js) 2.1.4
[update] Follow this chat for more details/examples
Later versions of all officially supported MongoDB drivers (including the node driver) follow the Server Discovery and Monitoring spec (SDAM), which mandates that all drivers monitor all nodes in a replica set (see Monitoring).
The reason for this monitoring is to be able to discover the status of the whole replica set at all times, and to reconnect to a new primary should the current primary go offline for any reason. See What's the point of periodic monitoring.
To be able to monitor all nodes in a replica set, the driver must have access to each replica set member. Since your replica set is defined using internal IPs that are inaccessible to the driver, the driver cannot connect to them. This is the reason for the error you're seeing.
There are a couple of ways to solve this issue:
Use IP addresses or hostnames for the replica set configuration that are accessible by the driver (recommended; a sketch follows below).
Connect to one of the nodes without specifying a replica set, essentially treating the node as a standalone (not recommended).
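A minimal sketch of the first option with a recent pymongo (all hostnames and addresses here are placeholders, not from the question): reconfigure the replica set so each member's host field is externally resolvable.
from pymongo import MongoClient
# Connect straight to the current primary; directConnection skips the
# replica set discovery that would otherwise fail here.
client = MongoClient("mongodb://203.0.113.10:27017/?directConnection=true")
# Fetch the current config, bump its version, and repoint each member
# at a hostname the external client can resolve and reach.
cfg = client.admin.command("replSetGetConfig")["config"]
cfg["version"] += 1
cfg["members"][0]["host"] = "mongo-1.example.com:27017"
cfg["members"][1]["host"] = "mongo-2.example.com:27017"
cfg["members"][2]["host"] = "mongo-arbiter.example.com:27017"
client.admin.command("replSetReconfig", cfg)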
If the older driver can connect without complaint, then either the driver is very outdated or it doesn't follow the SDAM spec properly and should not be used, since its behaviour cannot be guaranteed. MongoDB publishes the SDAM spec and mandates that all drivers follow it for a good reason.

Kafka - Fail Connecting Remote Broker - NoBrokersAvailable [duplicate]

This question already has answers here:
NoBrokersAvailable: NoBrokersAvailable-Kafka Error
I have created a cluster (Google Cloud) with 3 nodes. Zookeeper is running on all nodes and I have started Kafka on one of the nodes. I can communicate (publish/consume) from any machine on the cluster, but when I try to connect from a remote machine I get a NoBrokersAvailable exception.
I have opened ports in the firewall for testing and I have tried messing around with advertised_host and port in the Kafka config but I am unable to connect.
What is the expected configuration? I would have expected that, with suitable defaults, my configuration would work in both the internal and the remote case, but it does not. I am not sure what part of the zookeeper/kafka configuration would allow me to tweak this.
What is to be done?
Set advertised.listeners=PLAINTEXT://<broker_ip>:9092 in the server.properties file, and make sure ingress to this advertised address is allowed through the GCP VPC firewall. Restart the Kafka server, then restart the producer and/or consumer (whichever is running).
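For reference, a minimal sketch with kafka-python (the broker IP is a placeholder): once advertised.listeners points at an address reachable from the client, a remote producer should connect without NoBrokersAvailable.
from kafka import KafkaProducer
# The broker returns its advertised listener in the metadata response,
# so that address (not just the bootstrap one) must be reachable here.
producer = KafkaProducer(bootstrap_servers="203.0.113.5:9092")
producer.send("test-topic", b"hello from a remote client")
producer.flush()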
Please check my answer to the same problem in another thread
NoBrokersAvailable: NoBrokersAvailable-Kafka Error