I am new to Solr and ZooKeeper and am trying to learn on my own. I understand that ZooKeeper is a file-like structure that manages the Solr cluster and prevents race conditions using locks. I don't understand what upconfig and downconfig are, or when we use them. It would be of great help if someone could give me a clear picture of this. Thanks in advance!
A better and more general description of ZooKeeper is an application that provides centralised configuration for distributed systems. So in SolrCloud, you can have multiple Solr instances across multiple servers acting together as a single cloud. However, if you want to update a collection's configuration, you don't want to have to go to each server and update them all individually. You want only one version of the config, which is then used by any collection that needs it. Hence the upconfig and downconfig commands.
upconfig uploads a configuration to ZooKeeper, which then ensures that all collections using that configuration (throughout the Cloud, on all the servers) have that specific config. So you only need to upload it once, on one server.
downconfig lets you fetch a configuration from ZooKeeper, for example so that you can edit it locally and then push it back up with upconfig.
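If you'd rather do this from code than from the Solr CLI / zkcli scripts, here is a minimal SolrJ sketch (assuming a Solr 8.x-era SolrJ where ZkConfigManager is available; the ZooKeeper address, config name, and paths are placeholders):

import java.nio.file.Paths;

import org.apache.solr.common.cloud.SolrZkClient;
import org.apache.solr.common.cloud.ZkConfigManager;

public class ConfigSync {
    public static void main(String[] args) throws Exception {
        // Placeholder ZooKeeper ensemble address; 30-second client timeout.
        SolrZkClient zkClient = new SolrZkClient("zk1:2181,zk2:2181,zk3:2181", 30000);
        try {
            ZkConfigManager configManager = new ZkConfigManager(zkClient);

            // "upconfig": push a local config directory to ZooKeeper under a name.
            configManager.uploadConfigDir(Paths.get("/path/to/myconfig/conf"), "myconfig");

            // "downconfig": pull the named config back down, e.g. for local editing.
            configManager.downloadConfigDir("myconfig", Paths.get("/tmp/myconfig"));
        } finally {
            zkClient.close();
        }
    }
}

Either way, the config ends up stored in ZooKeeper under its name, and collections reference it by that name.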
I am going to create a load balancer in Azure. I have a VM that is already running; I am going to take a backup of that existing VM and create another VM from the backup. So the two servers will have the same configuration and will use the same credentials.
On the existing server I have MongoDB configured, and if I create the VM from the backup it will also have the same configuration as the old VM. What I want to know is: can I use the same MongoDB, accessed by two servers that have the same configuration?
Will it create any mess or give any errors?
Can I use it as mentioned above?
Do I need to configure another MongoDB for the second server?
Can anyone please clarify these questions? It would be great to have a clear explanation. Thank you.
MongoDB has built-in support for horizontal scalability and high availability, meaning that you don't need to create a third-party load balancer; the mongos service that is part of a MongoDB sharded cluster is the load balancer itself. Check the official documentation for MongoDB replication and sharding.
On your questions:
Will it create any mess or give any errors?
If you just copy the data to another VM it will be fine. As long as you don't write to either of the VMs, you can load-balance reads between these independent VMs, but this is a strange approach when MongoDB has a built-in replication mechanism and you can simply add the second VM as a SECONDARY member of a replica set.
Can I use it as mentioned above?
Sure, you can also use this approach, but why would you need to?
Do I need to configure another MongoDB for the second server?
It depends on the use case, but in general you would prefer to create a three-member replica set; or, if your database is large and write performance is a strong requirement, you may need to distribute the database across multiple servers (shards), in which case you will need more than just three servers.
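To make the replica-set suggestion concrete, here is a minimal sketch with the MongoDB Java driver (sync API, 4.x); the host names, replica set name, and database/collection names are placeholders, and it assumes the VMs have actually been joined into a replica set rather than just cloned:

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class ReplicaSetReads {
    public static void main(String[] args) {
        // Placeholder hosts and replica set name; reads may go to secondaries,
        // so they are spread across members instead of all hitting the primary.
        String uri = "mongodb://vm1:27017,vm2:27017,vm3:27017/"
                + "?replicaSet=rs0&readPreference=secondaryPreferred";

        try (MongoClient client = MongoClients.create(uri)) {
            MongoCollection<Document> coll = client.getDatabase("appdb").getCollection("items");
            System.out.println("documents: " + coll.countDocuments());
        }
    }
}

With readPreference=secondaryPreferred the driver sends reads to secondaries when they are available, which gives you read load-balancing without any external load balancer.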
I'm fairly new to MongoDB (Atlas - free tier), where I have created a project using it for storing my data. I had it set up and working fine for a couple of weeks, when suddenly I received an email with: An alert is open for your Atlas project: Replica set has no primary. I have no idea what this means and I don't believe I have done anything in the last couple of days/weeks that could warrant this alert. However, after checking my project, it seems that I can no longer connect to my cluster and access my data.
After checking on MongoDB Cloud, it seems that my cluster has stopped working and only the secondary shard (don't know if this is the right terminology) is running, while the other two seem to be down. Can anyone explain what this means, why it is happening or how to fix it? Thanks.
To troubleshoot issues like this, read the server logs and act based on the information therein.
For free and shared tiers in Atlas the logs are apparently not available. Therefore:
For a free tier cluster (M0), delete this cluster and create a new one. If you don't have a backup you should be able to dump via a direct connection to any of the operational secondary nodes or using the secondary read preference (see the sketch after this list).
For a shared tier cluster (M2/M5), use the official MongoDB support channels for assistance.
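As a rough illustration of the "dump via a direct connection to a secondary" option, here is a sketch using the MongoDB Java driver (4.x); the host, credentials, and database name are placeholders, and in practice mongodump or mongoexport against the same connection string is usually the simpler route:

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class SecondaryDump {
    public static void main(String[] args) {
        // Placeholder host/credentials; directConnection=true talks to this one node only,
        // and readPreference=secondary allows reads even though it is not the primary.
        String uri = "mongodb://user:pass@secondary-host:27017/"
                + "?directConnection=true&readPreference=secondary&tls=true";

        try (MongoClient client = MongoClients.create(uri)) {
            MongoDatabase db = client.getDatabase("mydb");
            for (String name : db.listCollectionNames()) {
                for (Document doc : db.getCollection(name).find()) {
                    // Crude JSON-lines dump of every collection in the database.
                    System.out.println(name + "\t" + doc.toJson());
                }
            }
        }
    }
}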
Is it possible to programmatically create/update a cluster on a remote Artemis server?
I will have lots of Docker instances and would rather configure them on the fly than have to set everything in XML files, if possible.
Ideally on app launch I'd like to check if a cluster has been set up and if not create one.
This would probably involve getting the current server configuration and updating it with the cluster details.
I see it's possible to create a Configuration.
However, I'm not sure how to get the remote server configuration, if it's at all possible.
import org.apache.activemq.artemis.core.config.ClusterConnectionConfiguration;
import org.apache.activemq.artemis.core.config.Configuration;
import org.apache.activemq.artemis.core.config.impl.ConfigurationImpl;

// Build a configuration locally and attach a cluster connection to it.
Configuration config = new ConfigurationImpl();
ClusterConnectionConfiguration ccc = new ClusterConnectionConfiguration();
ccc.setAddress("231.7.7.7");
config.addClusterConfiguration(ccc);

// Need a way to get and update the current configuration of the remote server,
// something along the lines of: ActiveMQServer.getConfiguration();
Any advice would be appreciated.
If it is possible, is this a good approach to take to configure on the fly?
Thanks
The org.apache.activemq.artemis.core.config.impl.ConfigurationImpl object can be used to programmatically configure the broker. The broker test-suite uses this object to configure broker instances. However, this object is not available in any remote sense.
Once the broker is started there is a rich management API you can use to add things like security settings, address settings, diverts, bridges, addresses, queues, etc. However, the changes made by most (although not all) of these operations are volatile which means many of them would need to be performed every time the broker started. Furthermore, there are no management methods to add cluster connections.
You might consider using a tool like Ansible to manage the configuration or even roll your own solution with a templating engine like FreeMarker to customize the XML and then distribute it to your Docker instances using some other technology.
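For the templating route, here is a minimal FreeMarker (2.3.x) sketch; the template path, output location, and model values such as clusterAddress and nodeName are hypothetical, and the broker.xml.ftl template is something you would write yourself from your existing broker.xml:

import java.io.File;
import java.io.FileWriter;
import java.io.Writer;
import java.util.HashMap;
import java.util.Map;

import freemarker.template.Configuration;
import freemarker.template.Template;

public class BrokerXmlGenerator {
    public static void main(String[] args) throws Exception {
        // Load templates from a local "templates" directory (placeholder path).
        Configuration cfg = new Configuration(Configuration.VERSION_2_3_31);
        cfg.setDirectoryForTemplateLoading(new File("templates"));
        cfg.setDefaultEncoding("UTF-8");

        // Per-instance values that differ between Docker containers (hypothetical names).
        Map<String, Object> model = new HashMap<>();
        model.put("clusterAddress", "231.7.7.7");
        model.put("nodeName", "broker-1");

        // Render broker.xml from a template such as templates/broker.xml.ftl.
        Template template = cfg.getTemplate("broker.xml.ftl");
        try (Writer out = new FileWriter("broker.xml")) {
            template.process(model, out);
        }
    }
}

You could run something like this at container start-up to produce the broker.xml for that instance before the broker itself is launched.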
I have a proprietary CMS that keeps a lot (20k lines) of configuration files on disk. I have quite a few nodes, all with the same configurations except for one or two elements that designate the node name and the IP.
Since this is proprietary I do not have a lot of leverage for going in and completely overhauling the configuration loading to look at an endpoint, though I might be able to be creative.
My questions are simple, but I do not know a better place to answer them:
Is this a use case for distributed configuration management like Zookeeper? Ideally I'd like to spin up a box and have it look for a service endpoint to load config files rather than have the config files deployed through source. This way I can update the configuration in one place, and have it replicate to all nodes without doing a full deployment.
Can Zookeeper (or an equivalent) mimic a file system? Could I mount an NFS point and have it expose the configuration as if it were a set of files on the filesystem, even if these are symbolic constructs? Does this make sense?
Your configuration use case seems more like a job for Chef, Puppet, or a similar system. They will allow you to update the configuration in one place, keep it version controlled, and distribute it properly to all target nodes.
ZooKeeper makes sense when your application or service needs to dynamically pick up fresh configuration data during live operation, and when multiple nodes in your system need the same consistent view of that data (a small example of that follows below). If you don't have these requirements, ZooKeeper is probably too much overhead for just laying down mostly static config files on disk.
As for mimicking a filesystem, there is zkfuse, which you could use to mount it. But again, it doesn't look like this is what you want. ZooKeeper should not be used as an actual file system replacement or file distribution system. It is best for storing small bits of metadata that need to be consistent across your distributed system.
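For completeness, this is roughly what the "fresh configuration data during live operation" case looks like with the plain ZooKeeper Java client; the ensemble address and the /myapp/config znode are placeholders:

import java.nio.charset.StandardCharsets;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class ConfigWatcher {
    public static void main(String[] args) throws Exception {
        // Placeholder ensemble address and session timeout.
        ZooKeeper zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 15000, event -> { });

        Watcher onChange = new Watcher() {
            @Override
            public void process(WatchedEvent event) {
                // Fires once when /myapp/config changes; re-read and re-register here.
                System.out.println("Configuration changed: " + event);
            }
        };

        // Read the current config and register a one-shot watch for the next update.
        byte[] data = zk.getData("/myapp/config", onChange, new Stat());
        System.out.println(new String(data, StandardCharsets.UTF_8));

        zk.close();
    }
}

Note that ZooKeeper watches are one-shot, so the process() callback has to re-read the znode and set a new watch each time it fires.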
I've got a group of servers that currently use both memcached and repcached side by side (listening on different ports). The memcached service is used to store local data that doesn't need to be shared. The repcached instance is used to allow pairs of servers to collaborate.
When I found Couchbase I was really excited because it looks like it would allow me to:
Make some data persistent
Share with more than two nodes
Leave most of my code as-is since it uses the memcached API
So I installed Couchbase, but I've run into a problem: it doesn't look like there's a way to set up two clusters on the same server. I'd like one cluster that doesn't share with any other server and a second cluster that does share with other servers.
Yes, I could set up several dedicated servers for Couchbase to create different clusters, but I've got plenty of CPU and RAM to spare on the servers that are currently running memcached + repcached, so I'd prefer to just replace those services with Couchbase.
Is it possible to run two instances of Couchbase on the same host? I realize I'd have to change some ports around. I just haven't seen anyone talking about doing anything like this so I'm thinking the answer is "no"... but I had to ask because it looks like Couchbase would be perfect for my needs.
If this won't work then I'd be interested in any alternative suggestions. For example, one idea I had was using Memcached + MemcacheDB to emulate a persistent non-shared Couchbase cluster. However, I don't like the fact that MemcacheDB doesn't support expiring records and I'd rather not have to write a routine to delete millions of records each month (and then wonder if performance will degrade over time).
Any thoughts would be appreciated. :-)
The best solution here is probably to run a single instance of Couchbase and create one memcached bucket and one Couchbase bucket. The memcached bucket won't have persistence and will function exactly like memcached. The other bucket will have persistence and still supports the memcached API. You can create as many buckets as you want in a single Couchbase server (see the sketch below).
Your other option is to virtualize and run a Couchbase server on each VM.
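To make the first option concrete, here is a sketch of creating the two buckets with the Couchbase Java SDK (assuming SDK 3.x and a recent server; note that memcached-type buckets are deprecated in newer Couchbase releases). The host, credentials, bucket names, and RAM quotas are placeholders:

import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.manager.bucket.BucketSettings;
import com.couchbase.client.java.manager.bucket.BucketType;

public class CreateBuckets {
    public static void main(String[] args) {
        // Placeholder host and credentials.
        Cluster cluster = Cluster.connect("couchbase://127.0.0.1", "Administrator", "password");

        // A memcached-type bucket: no persistence, behaves like plain memcached.
        cluster.buckets().createBucket(
                BucketSettings.create("local-cache")
                        .bucketType(BucketType.MEMCACHED)
                        .ramQuotaMB(256));

        // A Couchbase-type bucket: persistent, replicated, still memcached-compatible.
        cluster.buckets().createBucket(
                BucketSettings.create("shared-data")
                        .bucketType(BucketType.COUCHBASE)
                        .ramQuotaMB(512));

        cluster.disconnect();
    }
}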