I already googled this but found nothing decisive. I want to clear out my ZooKeeper data; is it alright if I run rmr on all the root nodes?
Thanks
Feel free to delete every node that your system is not using.
Also, you may want to use ephemeral nodes, which get deleted automatically when the connection with the ZooKeeper server is dropped.
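For example, a short zkCli.sh session illustrating both points (the /myapp paths are made up for illustration only):
rmr /myapp/stale-subtree              # recursively deletes an unused subtree (the command is deleteall on 3.5+)
create -e /myapp/worker-1 "host-a"    # ephemeral node, removed automatically when this session/connection ends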
I'm stuck on the configuration of a deployment. The problem is the following.
The application in the deployment uses a database that is stored in a file. While this database is open, it is locked (no concurrent read/write access is possible).
If I delete the running pod, the new one can't reach the ready state because the database is still locked. I read about the preStop hook and tried to use it, without success.
I could delete the lock file, but that seems pretty harsh. What's the right way to solve this in Kubernetes?
This really isn't different from running this process outside of Kubernetes. When the pod is killed, it is given a chance to shut down cleanly, so the lock should be cleaned up. If the lock isn't cleaned up, there aren't many ways to determine whether it remains because of an unclean shutdown, an unhealthy node, or a network partition. So deleting the lock at pod startup does seem unwise.
I think the first step for you should be trying to determine why this lock file isn't getting cleaned up correctly. (Rather than trying to address the symptom.)
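If the application does need an explicit nudge to close the database before the container stops, a minimal sketch of the relevant part of the pod spec looks like the following (the container name, image, shutdown command, and grace period are assumptions, not something from the question):
spec:
  terminationGracePeriodSeconds: 60     # time for the app to close the database file and release the lock
  containers:
  - name: app
    image: my-app:latest
    lifecycle:
      preStop:
        exec:
          # hypothetical command that asks the application to shut down gracefully
          command: ["/bin/sh", "-c", "/app/bin/shutdown --wait"]
Kubernetes runs the preStop hook, then sends SIGTERM and waits up to the grace period before killing the container, so an application that handles SIGTERM properly should release the lock without any extra hook.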
I am running a 3-node ZooKeeper cluster for Storm and Kafka. The ZooKeeper data directory eats up all the space on my system. I am not sure how to clean it up; I don't want to delete the data entirely, because I would lose the state of the processes. I looked into autopurge.purgeInterval in zoo.cfg, but it doesn't work as I expected.
I am using ZooKeeper 3.4.6.
How can I delete the old data without affecting the new ones?
Assuming you have ZooKeeper installed in /opt/zookeeper-3.4.6, the following will delete all but the last 10 snapshots.
Stop ZooKeeper (/opt/zookeeper-3.4.6/bin/zkServer.sh stop)
cd /opt/zookeeper-3.4.6
./bin/zkCleanup.sh ../data/ -n 10
Probably wise to make a backup of the whole /opt/zookeeper-3.4.6 dir before doing this.
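For ongoing maintenance, ZooKeeper 3.4.x can also purge old snapshots and transaction logs by itself. A sketch of the relevant zoo.cfg settings (the values are only examples; restart ZooKeeper after changing them):
autopurge.snapRetainCount=10   # keep only the 10 most recent snapshots (the minimum allowed is 3)
autopurge.purgeInterval=24     # run the purge task every 24 hours; 0 (the default) disables autopurge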
Stop ZooKeeper
Go to the bin folder of your ZooKeeper installation
Run ./zkCli.sh
Use ls / to check ZooKeeper's content
Identify what you want to delete, with its exact path
Run delete /znode (or rmr /znode to remove a node and all of its children) on the path you want to delete
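A short example session (the /myapp paths are made up):
ls /
[zookeeper, myapp]
delete /myapp/old-config     # removes a single znode that has no children
rmr /myapp/old-subtree       # removes a znode and everything under it (called deleteall in 3.5+)
Leave the built-in /zookeeper node alone; it is used internally (for quotas, for example).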
I'm new to Curator and ZK, and wanted to double-check my understanding with the rest of the community. It seems that the documentation for Curator is not that extensive.
Are Curator's persistent ephemeral nodes basically ephemeral znodes, but with extra machinery to re-create them after the connection is dropped and re-established? Are there any other differences that are not obvious?
P.S. Is there a community/discussion group for ZK (or, better yet, Curator)? A simple Google search did not turn up anything.
The PersistentEphemeralNode recipe makes sure that a specified EPHEMERAL node exists even if there is a server partition, etc. The recipe creates the node internally, monitors the connection and recreates the node if it gets deleted due to connection instability.
The Apache Curator website - http://curator.apache.org/ - has documentation. It also lists the mailing lists for Curator: http://curator.apache.org/mail-lists.html
NOTE: I'm the main author of Curator
I have installed and set up a two-node cluster of Postgres-XL 9.2, where the coordinator and GTM run on node1 and the datanode is set up on node2.
Now, before I use it in production, I have to deliver a DRP solution.
Does anyone have a DR plan for a Postgres-XL 9.2 architecture?
Best Regards,
Aviel B.
So from what you described you only have one of each node... What are you expecting to recover to?
Postgres-XL is a clustered solution. If you only have one of each node, then you have no cluster: not only do you get no scaling advantage, it will actually run slower than standalone Postgres. Plus you have nothing to recover to. If you lose either node you have completely lost the database.
Also the docs recommend you put the coordinator and data nodes on the same server if you are going to combine nodes.
So for the simplest solution in Replication mode you would need something like
Server1 GTM
Server2 GTM Proxy
Server3 Coordinator 1 & DataNode 1
Server4 Coordinator 2 & DataNode 2
Postgres-XL has no failover support, so any failure will require manual intervention.
If you use the DISTRIBUTE BY REPLICATION option, you would just remove the failing node from the cluster and restart everything.
If you use any other DISTRIBUTE BY option, the data is spread over multiple nodes, which means that if you lose any node you lose everything. So for this option you will need a slave instance of every data node and coordinator node you have. If one of the nodes fails, you remove that node from the cluster, replace it with its slave backup node, and then restart it all.
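As a hedged illustration of the difference (table, column, and host names here are made up):
# every datanode holds a full copy of the table, so losing one datanode does not lose the data
psql -h coordinator1 -c "CREATE TABLE events_replicated (id int, payload text) DISTRIBUTE BY REPLICATION;"
# rows are hash-distributed across datanodes, so losing a datanode loses part of the table
psql -h coordinator1 -c "CREATE TABLE events_sharded (id int, payload text) DISTRIBUTE BY HASH(id);"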
I wonder about the best strategy with regard to ZooKeeper and SolrCloud clusters. Should one ZooKeeper cluster be dedicated to each SolrCloud cluster, or can multiple SolrCloud clusters share one ZooKeeper cluster? I guess the former is a very safe approach, but I am wondering whether the second option is fine as well.
As far as I know, SolrCloud uses ZooKeeper to share cluster state (which nodes are up or down) and to load the shared core configuration (solrconfig.xml, schema.xml, etc.) at startup. If you have clients based on SolrJ's CloudSolrServer implementation, they will mostly perform reads of the cluster state.
In this respect, I think it should be fine to share the same ZK ensemble. Many reads and few writes: this is exactly what ZK is designed for.
SolrCloud puts very little load on a ZooKeeper cluster, so if it's purely a performance consideration then there's no problem. It would probably be a waste of resources to have one ZK cluster per SolrCloud if they're all on a local network. Just make sure each SolrCloud keeps its files under a separate ZooKeeper path (chroot). For example, using -zkHost host:port/path1 for one SolrCloud and replacing "path1" with "path2" for the second one will put the Solr files in separate paths within ZooKeeper and ensure they don't conflict.
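A hedged sketch of that setup (host names and chroot paths are made up; depending on the Solr version the ZooKeeper address is passed via bin/solr -z, the zkHost system property, or solr.xml):
# create the chroot znodes once, e.g. from zkCli.sh:
create /solr-cluster1 ""
create /solr-cluster2 ""
# then point each SolrCloud at its own chroot:
bin/solr start -cloud -z "zk1:2181,zk2:2181,zk3:2181/solr-cluster1"
bin/solr start -cloud -z "zk1:2181,zk2:2181,zk3:2181/solr-cluster2"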
Note that the ZK cluster should be well configured and robust, because if it goes down, none of the SolrClouds will be able to respond to changes in node availability or state (for example, if a SolrCloud leader is lost or unreachable, or if a node enters the recovering state).