I have tried to create a 2-node cluster with CouchDB 2.1 multiple times (both on Windows and Ubuntu), and it never worked. I did exactly as described in the official documentation here.
When I finish the cluster configuration of the two nodes, I expect a database created on node1 to show up on node2. Verification via Fauxton also fails with "internal server error" on both nodes. This happens under Linux (Ubuntu 14.04) as well as Windows (10, Server 2012, Server 2016), all with version 2.1.
Configuring both CouchDB nodes via the API:
node1 (10.0.0.1):
1. POST {"action": "enable_cluster", "bind_address": "0.0.0.0", "username": "admin", "password": "mypassword", "node_count": "2"}
2. POST {"action": "enable_cluster", "bind_address": "0.0.0.0", "username": "admin", "password": "mypassword", "port": 5984, "node_count": "2", "remote_node": "10.0.0.2", "remote_current_user": "admin", "remote_current_password": "mypassword"}
3. POST {"action": "add_node", "host": "10.0.0.2", "port": "5984", "username": "admin", "password": "mypassword"}
4. POST {"action": "finish_cluster"}
http://10.0.0.1:5984/_membership
{"all_nodes":["couchdb#localhost"],"cluster_nodes":["couchdb#10.0.0.2","couchdb#localhost"]}
node2 (10.0.0.2):
Same configuration as node1, but the IP address of the other node changes to 10.0.0.1.
http://10.0.0.2:5984/_membership
{"all_nodes":["couchdb#localhost"],"cluster_nodes":["couchdb#10.0.0.1","couchdb#localhost"]}
I never defined a zone - is this mandatory for the cluster to work?
Has anybody here already set up a working CouchDB cluster with 2 or more nodes?
Is anybody able to spot a mistake I made while configuring the nodes? Please let me know if I can provide more information.
Help would be greatly appreciated.
Best regards,
Harald
I've configured a 3-node cluster of CouchDB 2.
In my opinion, the main difficulty is correctly setting up the networking configuration between the nodes for the Erlang communication.
http://docs.couchdb.org/en/2.1.1/cluster/setup.html#cluster-setup
First, you should make sure that Erlang is communicating between the nodes (http://docs.couchdb.org/en/2.1.1/cluster/setup.html#first-time-in-erlang-time-to-play).
You should set the Erlang node name value in the vm.args file of your CouchDB installation.
The name you use should be resolvable via DNS or the local hosts file on both nodes.
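For example, in etc/vm.args on each node (the hostnames and cookie below are placeholders; pick names that both machines can resolve):

# node name; node2 would use couchdb@node2.example.com
-name couchdb@node1.example.com
# Erlang cookie; must be identical on all nodes
-setcookie brumbrum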
Finally, when you are sure that Erlang is communicating, you should register both nodes in the cluster.
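In CouchDB 2.x this can be done by adding the other node to the _nodes database through the node-local port 5986; a sketch, reusing the placeholder names from above:

curl -X PUT http://admin:mypassword@localhost:5986/_nodes/couchdb@node2.example.com -d '{}'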
Related
We have a Go service which goes to Redis to fetch data for each request, and we want to read data from the Redis slave nodes as well. We went through the documentation of Redis and the go-redis library and found that, in order to read data from a Redis slave, we should fire the READONLY command on the Redis side. We are using ClusterOptions in the go-redis library to set up a read-only connection to Redis.
redis.NewClusterClient(&redis.ClusterOptions{
    Addrs:    []string{redisAddress}, // a single Kubernetes cluster IP (see context below)
    Password: "",
    ReadOnly: true, // enable read-only commands on slave nodes
})
After doing all this, we are able to see (using monitoring) that read requests are handled by the master nodes only. I assume this is not expected and that I am missing something or doing it wrong. Any pointers to solve this problem would be appreciated.
Some more context:
redisAddress in the code above is a single Kubernetes cluster IP. Redis is deployed using a Kubernetes operator with 3 masters and 1 replica per master.
I've done it by setting the option RouteRandomly: true.
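A minimal sketch of the resulting options (written against go-redis v8, so Get takes a context; the address is a hypothetical cluster IP):

package main

import (
	"context"
	"fmt"

	"github.com/go-redis/redis/v8"
)

func main() {
	client := redis.NewClusterClient(&redis.ClusterOptions{
		Addrs:         []string{"10.0.0.5:6379"}, // hypothetical cluster IP
		ReadOnly:      true,                      // enable read-only commands on slave nodes
		RouteRandomly: true,                      // route read-only commands to a random master or slave
	})

	// Reads like this one may now be served by replicas as well.
	val, err := client.Get(context.Background(), "some-key").Result()
	fmt.Println(val, err)
}

Per the go-redis docs, RouteRandomly allows routing read-only commands to a random master or slave node, which is what finally spread the reads off the masters.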
I don't know if I understand the options of consul exec...
I have a consul server and several consul clients: https://play.golang.org/p/s3N3r3lK9e (example of config files)
I would like to create a service to run a program on each client:
"service": {
"name": "runner", "port": 7700,
"check": {
"script": "/usr/local/bin/myApp --run"
}
}
When a new KV is written in Consul, I want to execute an app on the server side that runs the service called "runner" on a specific node. In other words, from my application I want to execute consul exec -service=runner to run another app (myApp --run) on the client node. Is this possible? Is this the meaning of consul exec?
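For reference, the invocation I have in mind would be something like this (scoped to nodes that provide the "runner" service):

consul exec -service=runner /usr/local/bin/myApp --run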
If you don't understand the question, I can rewrite it.
Usually it's used for common jobs on all nodes, for example something like sudo apt-get update.
But remember, it will run on ALL nodes in the cluster. So if the command produces huge output, it will be a mess.
Secondly, there is no guarantee of execution.
For things like this, I recommend using a system like Ansible, Chef, etc.
I've installed mesos-dns in our cluster and it is running OK. We can resolve the domains of the apps installed in Marathon, but I would like to know on which host Marathon itself is installed. If I do a dig on marathon.domain, it doesn't resolve anything.
According to the doc of mesos-dns: "A records ({framework}.domain) and SRV records (_framework._tcp.{framework}.domain) - for every known Mesos master"
Thanks.
It's marathon.mesos unless you've used a different TLD. The Marathon scheduler runs on the Master.
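A quick check, assuming the default .mesos TLD (point dig at your mesos-dns server with @ if it isn't your default resolver):

dig +short marathon.mesos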
You can use my mesosdns-resolver bash script to get the endpoint from Mesos DNS.
You can use it like:
mesosdns-resolver.sh -sn <service-name>.marathon.mesos -s <IP_ADDRESS_OF_MESOS_DNS_SERVER>
I have created an Orion Context Broker instance (FIWARE cloud portal image) that seems to come with pepProxy installed. When I run "service pepProxy start", here is the feedback from the terminal:
Starting...
pepProxy dead but pid file exists
Starting pepProxy... Success
When I check the status with "service pepProxy status", it says:
pepProxy dead but pid file exists
What can be done?
It seems something is preventing the PEP Proxy from starting. Have you checked "/var/log/pepProxy"? Please also check which port the PEP Proxy is trying to bind to (usually 1026) and whether any other process is already running on that port (maybe the Context Broker is already running on that standard port).
In case the problem is a port conflict, you should change the Context Broker port in /etc/sysconfig/contextBroker or the one of the PEP Proxy in /etc/sysconfig/pepProxy.
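A quick way to check for such a port conflict (1026 is the assumed default here):

sudo netstat -tlnp | grep :1026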
If that's not the problem we would need some more information in order to help you.
Having received no replies on the Couchbase forum after nearly 2 months, I'm bringing this question to a broader audience.
I'm configuring Couchbase Server 2.2.0 XDCR between two different OpenStack (Essex, eek) installations. I've done some reading on using a DNS FQDN trick in the couchbase-server file to add a -name ns_1@(hostname) value in the start() function. I've tried that with absolutely zero success. There's already a flag in the start() function that says -name 'babysitter_of_ns_1@127.0.0.1', so I don't know if I need to replace that line, comment it out, or keep it. I've tried all three of those; none of them seemed to have any positive effect.
The FQDNs point to the OpenStack floating_ip addresses (in Amazon-speak, the "public" ones). Should they point to the fixed_ip addresses (Amazon: private/local) of the nodes instead? Between OpenStack installations, I'm not convinced pointing to an unreachable (potentially duplicate) class-C private IP is of any use.
When I create a remote cluster reference using the floating_ip address of a node in the other cluster, it creates the cluster reference just fine. But when I create a replication using that reference, I always get one of two distinct errors: Save request failed because of timeout or Failed to grab remote bucket 'bucket' from any of known nodes.
What I think is happening is that the OpenStack floating_ip isn't being recognized or translated to its fixed_ip address before surfing the cluster nodes for the bucket. I know the -name ns_1@(hostname) modification is supposed to fix this, but I wonder if anyone has had success configuring XDCR between OpenStack installations who might be able to provide some tips or hacks.
I know this "works" in AWS. It's my belief that AWS uses some custom DNS enabling queries to return an instance's fixed_ip ("private" IP) when going between availability zones, possibly between regions. There may be other special sauce in AWS that makes this work.
This blog post on AWS Couchbase XDCR replication should help! There are quite a few steps, so I won't paste them all here.
http://blog.couchbase.com/cross-data-center-replication-step-step-guide-amazon-aws