xmonad --restart does not seem to be working

As of recent changes to my xmonad.hs (importing and using the MouseResizeableTile layout and the FindEmptyWorkspace action), xmonad --recompile works fine, and if I log out and back in all is well, but if I issue xmonad --restart nothing seems to happen. Certainly, my startup hook is not run. As this behaviour is so totally unexpected, I'm not sure where to begin to look. I have rolled back the changes to the last configuration that worked, but to no avail.
darcs version 0.12 on Ubuntu 14.04

This is a known problem when the workspace stack is too complicated to pass on to the newly spawned instance of xmonad, i.e. the argument list for the new instance exceeds the kernel's limits. The failed instance falls back safely to the currently running session.
Next question: how to increase the maximum argument length...
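For what it's worth, the limits in question can be inspected from a shell; this is just a quick way to see what the kernel on the affected machine allows (the exact numbers, and how they relate to ulimit, vary by kernel version):

getconf ARG_MAX                  # total space allowed for argv + environment, in bytes
xargs --show-limits < /dev/null  # GNU xargs prints a more detailed breakdown of the same limits
ulimit -s                        # stack limit in kB; on recent Linux kernels the total argument
                                 # space is derived from this (roughly a quarter of it)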

Could switching VMs fix *** stack smashing detected ***

The scheduler I have been working on for my OS class has been getting a "*** stack smashing detected ***" error on the VM I'm using (Vagrant with VirtualBox). The error occurs roughly 50% of the time I run the program.
When switching to the VM cluster provided by our professor (connected via SSH from the aforementioned VM), the error never showed up.
My first instinct was that my local VM didn't have enough memory allocated to it and that somehow the code I was running was going out of bounds of what my VM could access (the test performs 128 matrix multiplications of varying sizes, each in its own thread).
Can anyone confirm whether this is a feasible explanation? My fear is that the error is simply being ignored on the other VM (I use the same makefile for both, which compiles with the flags -g and -lm).
Thanks!
"Stack smashing detected" means your program overwrote the "canary" value placed just above the area where its local variables live. It's usually due to writing more elements into a local array than were allocated for it. A bug-free program should never do this on any machine, no matter how much or how little memory is available, so your program is buggy and needs to be fixed.
In particular, this error is not caused by simply running out of stack space.
Most likely the other VM's compiler is configured with this check disabled by default. You may be able to re-enable it with -fstack-protector. Either way, you should investigate and fix the bug on whichever machine lets you reproduce it.
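If it helps to make the crash more deterministic, something along these lines can rebuild the test with the stack protector forced on and AddressSanitizer enabled (the source and binary names are placeholders for whatever the assignment actually uses; the flags themselves are standard GCC options):

gcc -g -O0 -fstack-protector-all -fsanitize=address -pthread \
    scheduler.c matrix_test.c -o scheduler_test -lm
# Run the test repeatedly; the sanitizer aborts with a stack trace on the first bad write.
for i in $(seq 1 20); do ./scheduler_test || break; done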

mongod main process killed by KILL signal

One of the mongo nodes in the replica set went down today. I couldn't find out what happened, but when I checked the logs on the server I saw the message 'mongod main process killed by KILL signal'. I tried googling for more information but failed. Basically, I'd like to know what the KILL signal is, who triggered it, and possible causes/fixes.
Mongo version 3.2.10 on Ubuntu.
The KILL signal (SIGKILL) means the process is terminated instantly, with no chance to exit cleanly. It is typically issued by the system when something goes very wrong.
If this is the only log entry left, the process was killed abruptly. Most likely your system ran out of memory (I've had this problem with other processes before). You could check whether swap is configured on your machine (using swapon -s), but you should probably consider adding more memory to your server; swap would only keep things from falling over, as it is very slow.
Other things worth looking at are the free disk space left and the syslog (/var/log/syslog).
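A few quick checks on a standard Ubuntu install can usually confirm or rule out the OOM killer (the grep patterns below match the usual kernel wording, which varies slightly between versions):

grep -iE 'out of memory|oom-killer|killed process' /var/log/syslog   # did the OOM killer fire?
dmesg | grep -i 'killed process'                                     # same information from the kernel ring buffer
swapon -s                                                            # is any swap configured?
free -m                                                              # current memory and swap usage
df -h                                                                # remaining disk space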

Can I configure icecream (icecc) to do zero local jobs

I'm trying to build a project on a rather underpowered system (an Intel Compute Stick with 1 GB of RAM). Some of the compilation steps run out of memory. I've configured icecc so that it can send jobs to a more powerful machine, but it seems that icecc will always run at least one job on the local machine.
I've tried setting ICECC_MAX_JOBS="0" in /etc/icecc/icecc.conf (and restarting iceccd), but the comments in this file say:
# Note: a value of "0" is actually interpreted as "1", however it
# also sets ICECC_ALLOW_REMOTE="no".
I also tried disabling the icecc daemon on the compute stick by running /etc/init.d/icecc stop. However, it seems that icecc is still putting one job on the local machine (perhaps if the daemon is off it's putting all jobs on the local machine?).
The project is makefile-based, and it appears that I'm stuck on a bottleneck step where calling make with -j > 1 still only issues one job, and that compilation exhausts the system's memory.
The only workaround I can think of is to compile on a different system and then ship the binaries back over, but I expect to enter a tweak/build/evaluate cycle on this platform, so I'd like to be able to work from the compute stick directly.
Both systems are running Ubuntu 14.04, if that helps.
I believe it is not supported, since icecc falls back to compiling on the host machine itself if there are network issues. The best solution would be to compile on the remote machine and copy back the resulting binary.
Have you tried setting ICECC_TEST_REMOTEBUILD in the client's terminal (the one where you run make)?
export ICECC_TEST_REMOTEBUILD=1
In my tests this always forces all sources to be compiled remotely.
Just remember that linking is always done on the local machine.
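As a rough sketch of how that fits into a build on the compute stick (the -j value is a guess, and the wrapper directory is where the Ubuntu icecc package normally installs its compiler symlinks; verify the path on your own system):

export ICECC_TEST_REMOTEBUILD=1        # ask icecc to send every compile job to a remote node
export PATH=/usr/lib/icecc/bin:$PATH   # compiler wrappers shipped by the Ubuntu icecc package
make -j4                               # parallelism is now bounded by the remote machine, not the stick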

Simultaneous mongo instances no longer possible?

I have a production and a development server. The production server is running a mongod and my development server is running 2 instances:
1 - a "slave" for production (using a replica set). This duplicates my data and allows for easier backups.
2 - a "master" holding my development collections (I couldn't use the slave instance for this, because of slaveOkay etc.).
They both have their own pid file, data folder, everything. This has been working without many issues for well over a year.
Unfortunately, since the last version it seems that whenever I start one instance, it terminates the other (prod slave <> dev master). No matter which one gets started first, the other is always stopped.
Does anyone have any idea why mongo suddenly behaves like this, and a solution for the problem?
Using the master instance to house the development collections is not really an option for me for various reasons.
Hope this makes things a bit more clear:
production writes --> production [master] --[replicaSet]--> development instance 1 [slave]
development writes --> development instance 2 [master]
Thanks!
Answering my own question for posterity:
It seems the init script for mongo version 2.4.6 has a peculiarity in it. It determines the PID file with the following expression:
PIDFILE=`awk -F= '/^dbpath\s=\s/{print $2}' "$CONFIGFILE"`
This looks for the dbpath with whitespace before and after the equals sign.
Since the default config file contains "dbpath=/var/lib/mongo" (no spaces), the result of this expression is empty.
I imagine the init script, when no specific .pid file is given, just uses the default .pid file location. Normally (with one instance) this has no consequences; in my case it causes the other mongo instance to be terminated.
I've always used the "pidfilepath" config directive up to now (I don't recall whether it was already in there when I first installed it). So I've updated my init scripts to the following (note the missing "\s"es):
PIDFILE=`awk -F= '/^pidfilepath=/{print $2}' "$CONFIGFILE"`
Then I made sure the config statements didn't have spaces:
pidfilepath=/var/run/mongo/mongod-name.pid
This solves my problem. I hope it does the same for anyone else in the same situation.
Why use the dbpath for the pid file anyway?
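For anyone applying the same fix, a quick sanity check that the patched expression now extracts the intended PID file (the config path below is just a common default; substitute the per-instance config files used here):

CONFIGFILE=/etc/mongodb.conf                       # or the per-instance config file
awk -F= '/^pidfilepath=/{print $2}' "$CONFIGFILE"  # should print e.g. /var/run/mongo/mongod-name.pid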

Replica set never finishes cloning primary node

We're working with an average-sized (50 GB) data set in MongoDB and are attempting to add a third node to our replica set (making it primary-secondary-secondary). Unfortunately, when we bring the nodes up (with the appropriate command-line arguments associating them with our replica set), they never exit the RECOVERING state.
Looking at the logs, it seems as though the nodes ditch all of their data as soon as the recovery completes and start syncing again.
We're using version 2.0.3 on all of the nodes and have tried adding the third node both from a "clean" (empty db) state and from a bootstrapped state (using mongodump to take a snapshot of the primary database and mongorestore to load that snapshot into the new node); both attempts failed.
We've observed this recurring phenomenon over the past 24 hours and any input/guidance would be appreciated!
It's hard to be certain without looking at the logs, but it sounds like you're hitting a known issue in MongoDB 2.0.3. Check out http://jira.mongodb.org/browse/SERVER-5177. The problem is fixed in 2.0.4, which has a release candidate available.
I don't know if it helps, but when I had that problem, I erased the replica's DB and re-initiated it. It started from scratch and replicated OK. Worth a try, I guess.
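While digging through this, the member states and the exact server build can be checked from the shell; the host and port below are placeholders for one of your own members:

mongod --version                                   # confirm which build each node is actually running
mongo --host primary.example.com --port 27017 \
      --eval 'rs.status().members.forEach(function(m){ print(m.name + " " + m.stateStr); })'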