My server is running searchd (Sphinx Search) as TWO processes, and I can't understand why. It used to be just one. What has changed recently is that I have introduced delta indexing in my sphinx.conf, which has been working fine as far as I can tell. I have a cron job merging the delta indexes into the main indexes, as you'd expect (roughly as sketched below).
Did the introduction of delta indexing create this new process instance?
If it helps, when I start searchd with sudo, the two processes are created one after another, which rules out the possibility that the second one gets created later down the line.
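For reference, the cron is roughly along these lines (the index names, binary path and schedule here are simplified placeholders, not my exact setup):

    */15 * * * *  /usr/local/bin/indexer --rotate delta
    30 3 * * *    /usr/local/bin/indexer --merge main delta --rotate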
I think 2 is normal. One is the actual process serving requests, the other is a watchdog. Its sole purpose is to automatically respawn the main process if it dies.
http://sphinxsearch.com/docs/current.html#conf-watchdog
There may also be more processes when actively serving requests. Depending on your exact settings, additional processes may be started up just to handle a request.
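If you really only want a single process, the watchdog can be disabled in the searchd section of sphinx.conf, although leaving it enabled is generally the safer choice; a minimal sketch:

    searchd
    {
        # ... your existing searchd options ...
        watchdog = 0   # 1 (the default) runs the extra watchdog process
    }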
I have a problem in production on my cluster. Our monitoring failed to catch that disk space was running out, and the disks filled up. I needed to remove some data directly on a master shard node, so I ran the following on the mongod:
db.collection.remove({query})
I know this is dangerous, but it was my only option at the moment because I couldn't open a mongo shell on the mongos.
Now the cluster works fine, but I need to know the real impact of my action and how to fix it.
The real impact is that you lose the data you deleted. There should be no other operational impact on the database itself. It should just return nothing when the affected documents are requested.
I'm sure you understand that deleting directly on a shard (bypassing mongos) is not a recommended action by any means. In general, bypassing mongos can result in undefined behavior of the cluster, and the resulting issue could stay dormant for a long time. In the worst case, it could lead to corrupt backups.
Having said that, deleting through the mongo shell (or a driver) is much preferable to going into the dbPath directory and deleting files. That could leave you with an unrecoverable database.
The more immediate impact may be felt by the application, e.g. if your application expects a result and it receives none. I would encourage you to test all workflows of your application and confirm that everything is working as expected.
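As a quick sanity check, you can run the same query through mongos (not directly against the shard) and confirm that only the intended documents are gone; the database and collection names below are placeholders:

    use mydb
    // same filter you used in the remove; should now return 0
    db.collection.find({query}).count()
    // do overall counts / sample reads through mongos still look sensible?
    db.collection.count()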
I'm running MATLAB on a cluster. When I run my .m script from an interactive MATLAB session on the cluster, my results are reproducible. But when I run the same script via a qsub command, as part of an array job away from my watchful eye, I get believable but unreproducible results. The .m files are doing exactly the same thing, including saving the results as .mat files.
Does anyone know why the scripts give reproducible results when run one way, and unreproducible results when run the other?
Is this only a problem with reproducibility, or is it indicative of inaccurate results?
%%%%%
Thanks to spuder for a helpful answer. Just in case anyone stumbles upon this and is interested, here is some further information.
If you use more than one thread in MATLAB jobs, you may end up stealing resources from other jobs, which plays havoc with the results. So you have 2 options:
1. Request exclusive access to a node. The cluster I am using does not currently allow parallel array jobs, so for me this was very wasteful: I took a whole node but used it serially.
2. Ask MATLAB to run with a single computational thread (singleCompThread). This may make your script take longer to complete, but it gets jobs through the queue faster. (See the sketch below.)
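A rough sketch of what option 2 looks like in a qsub submission script; the scheduler directives, core counts and script name are placeholders for your own setup:

    #!/bin/bash
    #PBS -N matlab_array
    #PBS -l nodes=1:ppn=1        # option 2: one core, run MATLAB single-threaded
    ##PBS -l nodes=1:ppn=8       # option 1 instead: grab a whole node (wasteful if used serially)
    cd $PBS_O_WORKDIR
    # -singleCompThread keeps MATLAB on a single computational thread
    matlab -nodisplay -nosplash -singleCompThread -r "my_script; exit"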
There are a lot of variables at play. Ruling out transient issues such as network performance and load, here are a couple of possible explanations:
You are getting assigned a different batch of nodes when you run an interactive job than when you use qsub.
I've seen some sites that assign a policy of 'exclusive' to the nodes that run interactive jobs and 'shared' to nodes that run queued 'qsub' jobs. If that is the case, then you will almost always see better performance on the exclusive nodes.
Another answer might be that the interactive jobs are assigned to nodes that have less network congestion.
Additionally, if you are requesting multiple nodes and you happen to land on nodes that traverse multiple hops, then you could be seeing significant network slowdowns. The solution would be for the cluster administrator to set up nodesets.
Are you using multiple nodes for the job? How are you requesting the resources?
I am wondering about this statement about config servers in MongoDB (from the documentation):
If any of the config servers is down, the cluster's meta-data goes read only. However, even in such a failure state, the MongoDB cluster can still be read from and written to.
We can use 1 or 3 config servers. Why, if we use 3 config servers and one of them goes down, does the cluster go into read-only mode?
As you can see from the link above, each config server has a complete copy of all chunk information.
If each server has a complete copy of all chunk information, why does the metadata go read-only when one config server is down?
The reason for this is the way config servers do their two-phase commits. If you have one config server and it fails, your whole system fails. If you have 3 and one fails, all of the metadata is still available, but you lose the resiliency of the two-phase commit: you cannot complete it unless all 3 members are up.
So you may still run off the other two for reads, but the balancer is essentially turned off so that no chunk migrations or splits happen (hence the metadata becomes read-only). This is because splits and migrations cannot be committed using the commit process a 3-node config setup uses, so they simply don't happen.
Running with 1 config server is not recommended. Basically if it goes down, you don't know where any of your data is.
The two-phase commit only works with 3 machines because it can make sure your data stays in a consistent state. If a machine dies in the middle of an update, that update will either fail or persist depending on whether it was committed to at least one other node, which will then update the third (hence the two-phase commit). So it is safe to read a sharded cluster using the 2 config servers that are left.
You can't do that with 2 nodes. An update might have gone through, or it might not have, and you can't tell, because there is nothing to compare the last remaining node against once the other one is down. So the safe thing to do is not to take any updates until the third node is back up; otherwise you may be reading out-of-date data.
If you want seamless failure resistance, it doesn't make sense to use 2, because of how the two-phase commit is used. Two nodes offer no more durability than 1 if you would rather have nothing than potentially incorrect data. And in a sharded cluster, nothing and incorrect data go hand in hand, since either way you don't know where to find your chunks.
It's basically done to protect you from potential config data corruption and inconsistencies.
Consider the following setup:
There are 2 physical servers which are set up as a regular MongoDB replica set (including an arbiter process, so automatic failover will work correctly).
Now, as far as I understand, most of the actual work will be done on the primary server, while the slave will mostly just do the work needed to keep its dataset in sync.
Would it be reasonable to introduce sharding into this setup by adding a second replica set on the same 2 servers, so that each of them runs one mongod process as a primary and one as a secondary?
The expected result would be that both servers share the workload of actual queries/inserts while both are up. If one server fails, the whole setup should fail over gracefully and keep running until the failed server is restored.
Are there any downsides to this setup, except the overall overhead in setup and number of processes (mongos/configservers/arbiters)?
That would definitely work. I'd asked a question in the #mongodb IRC channel a bit ago as to whether or not it was a bad idea to run multiple mongod processes on a single machine. The answer was "as long as you have the RAM/CPU/bandwidth, go nuts".
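A rough sketch of the layout the question describes, in case it helps; hostnames, ports, paths and replica-set names are placeholders, and the config servers, arbiters and rs.initiate() steps are omitted for brevity:

    # on server A (server B runs the mirror image, with the roles swapped)
    mongod --shardsvr --replSet rsA --port 27018 --dbpath /data/rsA   # primary of rsA here
    mongod --shardsvr --replSet rsB --port 27019 --dbpath /data/rsB   # secondary of rsB here

    # then, from a mongo shell connected to a mongos:
    sh.addShard("rsA/serverA:27018,serverB:27018")
    sh.addShard("rsB/serverA:27019,serverB:27019")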
It's worth noting that if you're looking for high-performance reads, and don't mind writes being a bit slower, you could:
Do your writes in "safe mode", where the write doesn't return until it's been propagated to N servers (in this case, where N is the number of servers in the replica set, so all of them)
Set the driver-appropriate flag in your connection code to allow reading from slaves.
This would get you a clustered setup similar to MySQL - write once on the master, but any of the slaves is eligible for a read. In a circumstance where you have many more reads than writes (say, an order of magnitude), this may be higher performance, but I don't know how it'd behave when a node goes down (since writes may stall trying to write to 3 nodes, but only 2 are up, etc - that would need testing).
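In the mongo shell, those two options look roughly like this (the collection and values are made up, and a real application would set the equivalent write-concern/slaveOk options through its driver):

    db.things.insert({ name: "example" })
    // "safe mode": wait until the write has reached 3 members (or time out)
    db.runCommand({ getLastError: 1, w: 3, wtimeout: 5000 })
    // allow this connection to read from secondaries
    db.getMongo().setSlaveOk()
    db.things.find({ name: "example" })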
One thing to note is that while both machines are up, your queries are being split between them. When one goes down, all queries will go to the remaining machine thus doubling the demands placed on it. You'd have to make sure your machines could withstand a sudden doubling of queries.
In that situation, I'd reconsider sharding in the first place, and just make it an un-sharded replica set of 2 machines (+1 arbiter).
You are missing one crucial detail: in a sharded setup with only two physical nodes, if one dies, all your data is gone. This is because you don't have any redundancy below the sharding layer (the recommended way is for each shard to be a replica set).
What you said about the replica set however is true: you can run it on two shared-nothing nodes and have an additional arbiter. However, the recommended setup would be 3 nodes: one primary and two secondaries.
http://www.markus-gattol.name/ws/mongodb.html#do_i_need_an_arbiter
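For reference, a minimal sketch of initiating the two-data-nodes-plus-arbiter variant from the mongo shell (hostnames and the set name are placeholders):

    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "serverA:27017" },
        { _id: 1, host: "serverB:27017" },
        { _id: 2, host: "arbiterHost:27017", arbiterOnly: true }  // data-less tie-breaker
      ]
    })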
I have recently had to install slony (version 2.0.2) at work. Everything works fine, however, my boss would like to lower the cpu usage on slave nodes during replication. Searching on the net does not reveal any blatantly obvious answers to this. Any suggestions that would help reduce CPU usage (or spread the update out over a longer period) would be very much appreciated!
Have you looked into general PostgreSQL tuning here? The server can waste a lot of CPU cycles doing redundant work if it's not given enough resources to work with, and the default config is extremely small. Tuning Your PostgreSQL Server is a useful guide here, shared_buffers and checkpoint_segments are the two parameters you might get some significant improvement from on a slave (many of the rest only really help for improving query time).
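For example, in postgresql.conf on the slave (these values are only a hedged starting point; the right numbers depend on your RAM and write volume):

    shared_buffers = 512MB       # the default is tiny; a few hundred MB helps on most dedicated boxes
    checkpoint_segments = 16     # the default of 3 forces very frequent, I/O-heavy checkpoints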
Magnus might be right, this could very well just be a symptom of the fact that your database has very high traffic. Slony effectively multiplies the resource usage of any given DML operation: not only is data CRUD'ed to the replication master, but every time that happens, a Slony trigger (think of it as a change listener) generates an identical transaction and forwards it to the Slon process, which runs it on other members of the cluster.
However, there are two other possible explanations/solutions to this issue:
A possible solution might be to run the slon processes on a separate machine from your database hosts. Even with a single-master/single-slave replication scheme, it is advantageous in terms of stability, role segregation, and performance (your concern here) to run the slon replication daemons on a physically different set of hardware (on the same LAN segment, ideally). There is nothing about Slony that requires it to run on the same machine as a given database host, so putting it in a different location (think "traffic controller") might relieve some of the resource load on your database hosts. This is also a good idea in terms of both machine stability and scalability.
There's also a chance that this is only a temporary problem caused by the fact that you recently started using Slony. When you first subscribe a new node to a replication set, that node (and, to some extent, its parent) experiences VERY heavy CPU load (and possibly disk load as well) during the subscription process. I'm not sure how it works under the covers, but, depending on how much data was already on the subscribed node, Slony will check the master's data against every single piece of data present on the slave in the replicated tables, and copy data down to the slave if it is missing or different. These are potentially CPU-intensive operations. Especially in large databases, the subscription process can take a very long time (it took over a day for me, but our database is over 20GB), during which CPU load will be very high.
A simple way to see what Slony is up to is to use pgAdmin's Server Status viewer, which, while limited, will give you some useful information here. If there are a lot of "prepare table for replication" or "cleanup table after replication" operations in progress on the node with the high CPU load, it's probably because a subscription isn't complete. pgAdmin's status viewer isn't very informative, however; there are more reliable ways of checking subscription progress using Slony directly. Section 4.7.6.4 of the Slony log-analysis documentation might help with that, as would reading the docs for SUBSCRIBE SET (pay special attention to the boxed warning message and the "Dangerous/Unintuitive Behavior" section).
A simple yet definitive hack to tell whether a set is still being subscribed is to run MERGE SET and try to merge it with another (possibly empty) set. MERGE SET will fail with a "subscriptions in progress" error if the subscription is still running. That hack won't work on Slony 2.1, however; there MERGE SET will just wait until the subscriptions are finished.
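A minimal slonik sketch of that MERGE SET probe; the cluster name, connection strings and set/node IDs are placeholders for your own configuration, and on Slony 2.0 a still-running subscription makes the last command fail rather than complete:

    cluster name = my_cluster;
    node 1 admin conninfo = 'dbname=mydb host=masterhost user=slony';
    node 2 admin conninfo = 'dbname=mydb host=slavehost user=slony';

    merge set (id = 1, add id = 2, origin = 1);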
The best way to reduce the CPU usage would be to put less data into the database :-)
Other than that, you can experiment with sync_interval. It may be what you're looking for.
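For example, in the runtime configuration file for the slave's slon daemon (both values are only hedged examples to experiment with, not recommendations):

    sync_interval = 10000        # look for new SYNC events every 10s instead of the 2s default
    sync_group_maxsize = 50      # apply more SYNC events per grouped transaction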