I have a dev server which I often push code changes to over Git. After each push, I need to manually log into the server and restart the supervisor processes.
Is there a way to have Supervisor monitor a filesystem directory for changes and reload the process(es) on changes?
You should be able to use an Event Listener which monitors the filesystem (perhaps with watchdog) and emits a restart using the XML-RPC API. Check out the memmon listener from the superlance package for inspiration. It wouldn't need to be that complicated, and since watchdog would call your restart routine directly, you wouldn't need to read the events using childutils.listener.wait.
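A minimal sketch of that idea, assuming the watchdog package is installed and supervisord exposes XML-RPC via the default [inet_http_server] port; the program name and watch path below are hypothetical placeholders:

import time
import xmlrpc.client  # xmlrpclib on Python 2

from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

SUPERVISOR_URL = "http://localhost:9001/RPC2"  # assumes [inet_http_server] is enabled
PROGRAM = "myapp"          # hypothetical [program:myapp] name
WATCH_PATH = "/srv/myapp"  # hypothetical directory your pushes land in

class RestartHandler(FileSystemEventHandler):
    """Restart the supervised program whenever anything changes."""
    def __init__(self):
        super().__init__()
        self.proxy = xmlrpc.client.ServerProxy(SUPERVISOR_URL)

    def on_any_event(self, event):
        # A restart over XML-RPC is a stop followed by a start.
        # In practice you would debounce this; a push touches many files.
        self.proxy.supervisor.stopProcess(PROGRAM)
        self.proxy.supervisor.startProcess(PROGRAM)

observer = Observer()
observer.schedule(RestartHandler(), WATCH_PATH, recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()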
Alternatively, git hooks might do the trick if the permissions are right for the supervisord API to be accessed (socket permissions, HTTP passwords); a simpler but less secure approach.
A simpler and even less secure approach would be to have the hook issue a supervisorctl restart directly. The user it runs as has to match your push user (or git, or www, depending on how you have it set up). There are lots of ways for that to go wrong security-wise, but for development it might do fine.
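For example, a hypothetical server-side post-receive hook (the repository layout and program name are assumptions), relying on the pushing user being allowed to talk to supervisord:

#!/bin/sh
# .git/hooks/post-receive on the dev server: check out the pushed
# code into the deploy directory, then bounce the supervised program.
GIT_WORK_TREE=/srv/myapp git checkout -f
supervisorctl restart myapp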
Related:
Supervisord: is there any way to touch-reload a child?
I also didn't find any solution, so I tried to make my own.
Here it is.
You can install the package by this command:
pip install git+https://github.com/stavinsky/supervisord-touch-reload.git
(I will add it to PyPI after adding some tests.)
An example supervisor setup is located in the examples folder on GitHub. Documentation will follow soon, I believe.
Basically, all you need to start using this module is to add an event listener with a command like:
python -m touch_reload --socket unix:///tmp/supervisor.sock --file <path/to/file> --program <program name>
where --file is the file that will be monitored (given as an absolute path, or one relative to the working directory), --socket is the socket from the [supervisorctl] section, and --program is the program name from the [program:<name>] section definition.
--username and --password options are also available, which you can use if you have a custom supervisor configuration.
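The examples folder has the canonical configuration, but the supervisord stanza for such a listener would typically look something like this (the events line is my assumption; the socket, file, and program values come from your own setup):

[eventlistener:touch_reload]
command=python -m touch_reload --socket unix:///tmp/supervisor.sock --file /srv/app/reload --program myapp
events=TICK_60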
While not a solution which uses supervisor, I typically solve this problem within the supervised app. For instance, add the --reload flag to gunicorn and it will reload whenever your app changes.
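For example, in supervisord.conf (the program name and WSGI module path are illustrative):

[program:myapp]
command=gunicorn --reload myapp.wsgi:application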
I had the same problem and created Superfsmon which can do what you want: https://github.com/timakro/superfsmon
pip install superfsmon
Here's a simple example from the README:
To restart your celery workers on changes in the /app/devops
directory your supervisord.conf could look like this.
[program:celery]
command=celery -A devops.celery worker --loglevel=INFO --concurrency=10
[program:superfsmon]
command=superfsmon /app/devops celery
Here is a one-liner solution with inotify-tools:
apt-get install -y inotify-tools
while true; do inotifywait -r src/ && service supervisor restart; done
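Note that service supervisor restart bounces supervisord itself along with everything it manages. If you only need one program cycled, a variant like this (the program name is a placeholder) is gentler:

while true; do inotifywait -r -e modify,create,delete src/ && supervisorctl restart myapp; done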
I am not sure if I can get help for this on here, but I thought it was worth a try.
I have a 3-node cluster on AWS running MapR M3, and I installed Storm, Kafka, Divolte Collector, and Cassandra. I would like to try some of the clickstream examples, and I am running into an issue with the tcp-consumer example. Being quite new to Java and distributed processing, I also have some clarification questions. I am not quite sure where to post this because I feel it is Divolte Collector specific, and I also have some gaps in my understanding of the Javadoc concept and of building and running jar files; but I figured someone could point me to some resources or help with some clarifications. I can't get the JSON string to appear in the console running the netcat socket listening for clicks:
Divolte tcp-kafka-consumer example
Everything works until the netcat part (step 7), and my knowledge gap is with step 6.
Step 1: install and configure Divolte Collector
The install works and the hello-world click collection is promising :-)
Step 2: download, unpack and run Kafka
# In one terminal session
cd kafka_2.10-0.8.1.1/bin
./zookeeper-server-start.sh ../config/zookeeper.properties
# Leave Zookeeper running and in another terminal session, do:
cd kafka_2.10-0.8.1.1/bin
./kafka-server-start.sh ../config/server.properties
No errors, plus I tested the Kafka examples, so it seems to be working as well.
Step 3: start Divolte Collector
Go into the bin directory of your installation and run:
cd divolte-collector-0.2/bin
./divolte-collector
Step 3 went off without a hitch; I can test the default divolte-collector test page.
Step 4: host your Javadoc files
Set up an HTTP server that serves the Javadoc files you generated or downloaded for the examples. If you have Python installed, you can use this:
cd <your-javadoc-directory>
python -m SimpleHTTPServer
OK, so I can reach the Javadoc pages.
Step 5: listen on TCP port 1234
nc -kl 1234
Note: when using netcat (nc) as a TCP server, make sure that you configure the Kafka consumer to use only 1 thread, because nc won't handle multiple incoming connections.
I tested netcat by opening the port and sending messages, so I figured I don't have any port issues on AWS.
Step 6: run the example
cd divolte-examples/tcp-kafka-consumer
mvn clean package
java -jar target/tcp-kafka-consumer-*-jar-with-dependencies.jar
Note: for this to work, you need to have the avro-schema project installed into your local Maven repository.
I installed the avro-schema project with mvn clean install in the avro project that comes with the examples, as per the instructions here.
Step 7: click around and check that you see events being flushed to the console where you run netcat
When you click around the Javadoc pages, your console should show events in JSON format similar to this:
I don't see the clicks in my netcat window :(
Investigating the issue, I viewed the console and network tabs using Chrome developer tools. It seems Divolte is running, but I am not sure how to dig further. This is the console view. Any ideas or pointers?
Thanks anyway.
Initializing Divolte.
divolte.js:140 Divolte base URL detected http://ec2-x-x-x-x.us-west-x.compute.amazonaws.com:8290/
divolte.js:280 Divolte party/session/pageview identifiers ["0:i6i3g0jy:nxGMDVdU9~f1wF3RGqwmCKKICn4d1Sb9", "0:i6qx4rmi:IXc1i6Qcr17pespL5lIlQZql956XOqzk", "0:6ZIHf9BHzVt_vVNj76KFjKmknXJixquh"]
divolte.js:307 Module initialized. Object {partyId: "0:i6i3g0jy:nxGMDVdU9~f1wF3RGqwmCKKICn4d1Sb9", sessionId: "0:i6qx4rmi:IXc1i6Qcr17pespL5lIlQZql956XOqzk", pageViewId: "0:6ZIHf9BHzVt_vVNj76KFjKmknXJixquh", isNewPartyId: false, isFirstInSession: false…}
divolte.js:21 Signalling event: pageView 0:6ZIHf9BHzVt_vVNj76KFjKmknXJixquh0
allclasses-frame.html:9 GET http://ec2-x-x-x-x.us-west-x.compute.amazonaws.com:8000/resources/fonts/dejavu.css
overview-summary.html:200 GET http://localhost:8290/divolte.js net::ERR_CONNECTION_REFUSED
(Intro: I work on Divolte Collector)
It seems that you are running the example on an AWS instance somewhere. If you are using the pre-packaged JavaDoc files that come with the examples, they have hard-coded the divolte location as http://localhost:8290/divolte.js. So if you are running somewhere other than localhost, you should probably create your own JavaDoc for the example, using the correct hostname for the Divolte Collector server.
You can do so using this command. Be sure to run it from the directory where your source tree is rooted, and of course change localhost to the hostname where you are running the collector.
javadoc -d YOUR_OUTPUT_DIRECTORY \
-bottom '<script src="//localhost:8290/divolte.js" defer async></script>' \
-subpackages .
As an alternative, you could also just try to run the examples locally first (possibly in a virtual machine, if you are on a Windows machine).
There doesn't seem to be anything MapR-specific about the issue you are seeing so far. The Kafka-based examples and pipeline should work in any environment that has the required components installed; they don't touch MapR-FS or anything else MapR-specific. Writing to the distributed filesystem is another story.
We don't compile Divolte Collector against MapR Hadoop currently, but incidentally I have given it a run on the MapR sandbox VM. When installing from the RPM distribution, create a /etc/divolte/divolte-env.sh with the following env var setting:
HADOOP_CONF_DIR=/usr/share/divolte/lib/guava-18.0.jar:/usr/share/divolte/lib/avro-1.7.7.jar:$(hadoop classpath)
Obviously this is a bit of a hack to get around classpath peculiarities and we hope to provide a distribution compiled against MapR that works out of the box in the future.
Also, you need Java 8 to run Divolte. If you install this from the Oracle RPM, add the proper JAVA_HOME to divolte-env.sh as well, e.g.:
JAVA_HOME=/usr/java/jdk1.8.0_31
With these settings I'm able to run the server, collect Avro files on MapR-FS, create an external Hive table on those files, and run a query.
We use an ETL process to pull data from Google Cloud Storage, but annoyingly it hangs every time Google releases updates to gsutil, because it sits at a prompt asking if you want to update the library. That's fine if you are doing this manually, but not cool when it's being run in an automated SSIS package, as jobs don't finish for days and you keep wasting time on the same stupid cause.
I thought I was going to be clever and add "python gsutil update -n" to the top of the bash script whose building/execution I'm automating in my SSIS package, in the hope of curbing this problem; but when I run this command from the prompt on either Windows Server 2008 R2 or Windows 7, I get the following:
C:\gsutil>python gsutil update -f -n
Copying gs://pub/gsutil.tar.gz...
OSError: The process cannot access the file because it is being used by another process.
Any help?
P.S. Also, Google engineers: can you PLEASE remove these prompts for all of us using these tools in automated processes? I have other things to work on instead of constantly coming back to things like this every few days/weeks.
What version of gsutil are you running?
Also, to be clear: Are you talking about the fact that gsutil checks for available software updates periodically, and if it finds them it then prompts you whether you want to update? Or are you talking about the fact that the gsutil update command asks if you want to perform the update?
If the former, gsutil shouldn't be performing this check/prompting if you are running gsutil from a script not connected to a TTY. If that's not working correctly, we'd like to know.
And also, if that's the problem you're having, you can completely disable automated software update checks by setting software_update_check_period=0 in the [GSUtil] section of your .boto config file.
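In other words, the relevant section of the .boto file would look like this:

[GSUtil]
software_update_check_period = 0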
I'm looking at hosting a number of small, static websites and have been looking at a few alternatives including G-WAN. At the moment I'm just trying to get a feel for how well each server suits my needs before picking one.
G-WAN seems to do exactly what I want, though I'm running into problems with updating the configuration (by adding new folders) after the server's started. I can't find anything in the documentation or online about this, so I don't know if I'm doing anything dumb, running an unsupported configuration, or whether it's a feature that doesn't exist in G-WAN.
Here's my setup:
G-WAN 3.3.28 64-bit on Ubuntu 12.04.1 LTS.
I have what I think is the required minimal folder structure:
0.0.0.0_80
    #0.0.0.0
        www
    $site.com
        www
    $othersite.com
        www
I start up G-WAN via (I'm still messing around, so hopefully this is right):
sudo ./gwan -d
Everything works brilliantly. Then I add $thirdsite.com/, $thirdsite.com/www/, and $thirdsite.com/www/index.html; when I try to visit thirdsite.com it gives me the root host (i.e. it doesn't seem to pick up the changes).
To reload the modified configuration, I have to either do:
sudo ./gwan -k; sudo ./gwan -d
or kill the non-angel process (kill -s 15) to restart the child process.
Can G-WAN reload the host definitions another way? If so, is it something that works out of the box, or is there a command that can cycle the server without dropping requests made to other hosts? (Is it safe to kill -s 15 the non-angel process and, if so, is there a reliable way to identify that process?) Thanks in advance!
G-WAN loads the host definitions at startup and does not re-check them over time to reload them dynamically.
To force a reload, you have to stop the child process (when in daemon mode); v3.9+ keeps the old child alive long enough to process any pending requests while the new child accepts new connections.
Since stopping the child can also be done from the maintenance script, from a handler, or from a servlet by just calling exit(0), there is no need for a dedicated command.
Note that when you use kill you can pick the PID file from the gwan directory:
the parent process's PID file starts with a capital letter: Gwan_xxxx.pid
the child process's PID file starts with a lowercase letter: gwan_xxxx.pid
That will make your life easier.
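Putting that together, a hedged example of cycling only the child from a shell (the xxxx part of the PID file name varies per instance, hence the glob):

# SIGTERM the child; the parent respawns it, which re-reads the host definitions.
kill -s 15 $(cat gwan_*.pid)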
I am working on a simple Perl app that copies another Perl app and builds all the required Apache config files.
The thing I can't seem to figure out is how to reload the apache config on the fly. I know I could do a system call and reload apache there, but that would mean I would have to get root access to this app, and that is a little scary.
Is there a way to ask apache to reload its config files from within the CGI container?
-------------------------Additional info------------------------------
I have done some more research, and the problem is that Apache must be run with elevated privileges to bind to port 80. So one solution would be to set Apache to run on another port and forward port 80 to it via iptables. (This may be a last resort, but it is a very messy solution.)
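For reference, that forwarding rule would look something like this (8080 is an arbitrary unprivileged port):

# Redirect incoming traffic on port 80 to port 8080, where Apache
# can bind without root.
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080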
Here is what gets me: Apache should be able to maintain its current port bindings and re-check its config files; all I am doing is adding another script alias.
Is there any way to add a new script alias with out a reload?
You also have these options to reload the config:
/etc/init.d/httpd reload
or
apachectl -k graceful
But unfortunately, those need root as well. A graceful restart differs from a normal restart in that currently open connections are not aborted. A side effect is that old log files will not be closed immediately; this means that if it is used in a log rotation script, a substantial delay may be necessary to ensure that the old log files are closed before processing them.
Also, if you are running Apache under daemontools, you can do this with:
svc -h /service/apache
Sorry to ask a question and then not give someone else the opportunity to answer, but I figured out a solution and I hope it may help someone else.
What I had to do was leave the config alone; it is not possible to reload it in the manner I required without root privileges or some fancy port forwarding (which would make this application less portable than I would like).
So the only thing that Apache appears to pick up dynamically is the file system.
What I have done is use mod_rewrite to redirect the script requests, and simply put the copies in /var/www/appname/copyname/cgi-bin/
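I haven't reproduced the exact rule here, but the idea looks roughly like this (all names and paths are illustrative): one wildcard rewrite in the static, root-installed config covers every future copy, so deploying a new copy only touches the filesystem:

# Map /copyname/cgi-bin/script to that copy's real cgi-bin directory.
# The target directory must already be configured to execute CGI.
RewriteEngine On
RewriteRule ^/([^/]+)/cgi-bin/(.+)$ /var/www/appname/$1/cgi-bin/$2 [L]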
When I used "reload" command, it didn't truly reloading all module like Asterisk start from beginning. So, when I just used "reload" command, I couldn't register SIP with my client application.
Is there any command that more truly restart the Asterisk?
restart now should work for you. If you have any concerns with a specific module, you can always try the following:
module unload name_of_module.so
module load name_of_module.so
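For example, from a shell (asterisk -rx runs a single CLI command; newer Asterisk versions prefix these commands with core/module, and chan_sip.so is the SIP channel driver):

# Full restart, dropping all calls immediately:
asterisk -rx "core restart now"
# Or bounce just the SIP channel driver:
asterisk -rx "module unload chan_sip.so"
asterisk -rx "module load chan_sip.so"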