I'm running an apache2 / mod_perl2 combo on our development server.
When I'm developing, my changes are instantly reflected in the webpage I'm working on. I assumed mod_perl was being clever and was reloading files when they were changed.
But now another developer is working on a different part of the system and their changes are not picked up by mod_perl. They have to restart Apache before they can see their changes.
Is there a way to disable caching on our development server, or get mod_perl to pick up his changes?
Thanks.
EDIT: I'm editing files directly on the dev server using vi; the other developer has mounted their dev directory via Samba and is editing their files in Windows. This seems to be the difference that prevents mod_perl from picking up changes.
I just read a nice blog post that sums up all possible ways to achieve this: How not to restart mod_perl servers by Jonathan Swartz
What exactly is the other developer changing?
To reload modules when they have changed you would use Apache2::Reload. (Though see Performance Issues before thinking about using this in production.)
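A minimal development-only setup might look something like this in the Apache config (the ReloadModules pattern is just an example; adjust it to your own namespaces):
PerlModule Apache2::Reload
PerlInitHandler Apache2::Reload
PerlSetVar ReloadAll Off
PerlSetVar ReloadModules "MyApp::*"
With ReloadAll On (the default) every modified module in %INC is reloaded instead, which needs no list but checks more files on each request.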
Even without that, mod_perl will reload CGI scripts when they change; I don't know of any way the other developer could have turned that off, if you are talking about CGI scripts.
Since it's just for development, how about just killing all of the child processes and letting the parent apache process respawn?
kill -9 $(ps axf | grep httpd | egrep -e ' S ' | cut -b1-5 | paste -s -d ' ')
The above command works for me on my box in my environment; your mileage may vary.
It's not an elegant solution by any means, but hey, it's faster than a full apache restart.
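Depending on your setup, a graceful restart may achieve much the same thing a little more politely (untested in this scenario, so treat it as a sketch):
apache2ctl graceful
Children finish their current requests before being replaced, so nothing in flight gets dropped the way it does with kill -9.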
I am a web developer, and for some upcoming projects I would like to use a file-based CMS. This means that many of the files I create at the start must be editable by the PHP user later, but also remain editable for my user (and also the other way around). My PC runs Debian 9, which I love but am not super knowledgeable about, and I have also just set up a local network server with Debian 9 for backups and possibly file sharing. (I'm using Webmin to configure this, which reflects my level of command line skills).
On my online shared hosting server, the PHP user and the FTP user seem to be the same, and 644/755 permissions work fine; this is also what the CMS I'm using recommends. I would like to mimic this on my computer so I don't have to fiddle with permissions all the time. But how do I do this? Currently, my regular user (anna) does not have access to www-data's files and vice versa. Putting them in the same group still means changing file permissions. Making anna the PHP user is a Bad Idea (as far as I understand it) because anna has sudo permissions.
So far I have researched three possible solutions that I don't really know very much about, and I would like to know which is the best route to take.
Develop locally on my computer and use apache-mpm-itk or suPHP to let PHP edit the files (I got that idea from this question on ServerFault).
Develop locally on my computer and rsync the files to my server with grunt-rsync, and somehow get rsync to set the ownership to www-data (another ServerFault thread helping here; see the rsync sketch after this list).
Mount the project's server directory, which is owned by www-data, on my computer with SSHFS and then either edit the files on the server directly or copy them over from my local directory with grunt-copy.
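For the rsync option, assuming rsync 3.1+ on both ends (Debian 9 ships 3.1.2) and a receiving side that is allowed to change ownership, something roughly like this could work; the paths are made up, and --rsync-path="sudo rsync" is only needed if you connect as a non-root user who may run rsync via sudo:
rsync -av --chown=www-data:www-data --rsync-path="sudo rsync" ./myproject/ anna@server:/var/www/myproject/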
What do you think: from a security and ease of use perspective, which is the best way? Or do you know an even better one?
Thank you for taking the time to read and think about this!
Anna~
I figured it out! I finally ended up reading about running PHP as CGI instead of as an Apache module, and that this would solve my permissions problem. Plus, as far as I understand it, there are no extra security precautions to take when I'm the only one working with it on my local computer.
In case someone comes across this who might find it helpful, here's what I did (basically following these instructions):
I installed php7.0-fpm
Edited /etc/apache2/sites-enabled/000-default.conf and put the following just before </VirtualHost>:
DirectoryIndex index.php
<LocationMatch "^(.*\.php)$">
ProxyPass fcgi://127.0.0.1:9000/var/www/html
</LocationMatch>
I activated the Apache module proxy_fcgi (via Webmin, which apparently does an automatic Apache restart)
In /etc/php/7.0/fpm/pool.d/www.conf I commented out a listen line and put another below like this:
; listen = /run/php/php7.0-fpm.sock
listen = 127.0.0.1:9000
I then restarted PHP-FPM with this command: /etc/init.d/php7.0-fpm restart (a little different from the instructions, I'm on Debian 9). After that, phpinfo() gave me the Server API "FPM/FastCGI".
And finally, I changed the user and group from www-data to anna in three places, twice in /etc/php/7.0/fpm/pool.d/www.conf and then once more in /usr/lib/tmpfiles.d/php7.0-fpm.conf (this last bit may be Ubuntu/Debian specific; my thanks go to Keith for a comment on StackExchange).
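For the record, and assuming a stock Debian 9 layout, the changed lines end up looking roughly like this. In /etc/php/7.0/fpm/pool.d/www.conf:
user = anna
group = anna
And in /usr/lib/tmpfiles.d/php7.0-fpm.conf (the exact line may differ between releases):
d /run/php 0755 anna anna -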
And that was it! :-)
Does MongoDB create a file I can poll for in order to determine when prealloc is done? Right now I have a script to run rs.init(..config..), but I need to hold off on triggering it until mongod is up and running.
Since tailing the log file with tail -f | grep ... | xargs ... is a bit of a flaky hack, I wondered if there is any other way to determine that mongod is done with prealloc?
We have the same problem for testing replica sets with the PHP driver. Here we use the mongo shell's ReplSetTest() functionality to get around this. You can see here how that works:
https://github.com/mongodb/mongo-php-driver/blob/master/tests/utils/myconfig.js#L9
However, I am not sure how well this works for non-test environments, as the number of options you can pass is rather limited (for example, you can't set a data dir properly, as things are hardcoded). All the functions and code for this are in JavaScript at https://github.com/mongodb/mongo/blob/master/src/mongo/shell/replsettest.js; this should give you an overview of how it works and let you rewrite it in your preferred language.
You could try using inotify (I am not sure exactly how it fits your case); for example, to determine that a file has been closed after writing:
[maverick#mutabor ~]$ pyinotify -e IN_CLOSE_WRITE /tmp/testfile
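If pyinotify isn't installed, inotify-tools gives you much the same thing from a shell script; the directory below is just a placeholder for your dbpath:
# blocks until a file in /data/db is closed after being written
inotifywait -e close_write /data/db/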
I came across the --delete-after option when I was reading the manpage of wget.
What's the purpose of providing such an option? Is it just for testing that the page downloads OK? Or are there other situations where this option is useful? I hope you can give me some hints.
With reference to your comments above, I'm providing some examples of how we use it. We have a few websites running on Rackspace Cloud Sites, which is a managed cloud hosting solution. We don't have access to regular cron.
We had an issue with runaway usage on a site using WordPress because WP kept calling wp-cron.php. To give you a sense of runaway usage, it used up in one day the CPU cycles allotted for a month. Anyway, what I did was disable wp-cron.php being called within the WordPress system and call it manually through wget. I'm not interested in the output from the process, so if I didn't use --delete-after with wget (wget ... > /dev/null 2>&1 works well too), the folder where wget runs would fill up with hundreds of useless copies of the output from each time the script was called.
We also have SugarCRM installed, and that system requires its cron script to be called to handle system maintenance. We use wget silently for that as well. Basically, a lot of these kinds of web-based systems have cron scripts. If you can't call your scripts directly, say using PHP on the machine, then the other option is to call them silently with wget.
The command to call these cron scripts is quite basic - wget --delete-after http://example.com/cron.php?parameters=if+needed
I'm using wget (with cron) to automate commands to a web application, so I have no interest in the contents of the pages. --delete-after is ideal for this.
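As a sketch, a crontab entry for that kind of job might look like this (URL and schedule made up):
# call the application's cron endpoint every 15 minutes, quietly, keeping nothing on disk
*/15 * * * * wget -q --delete-after "http://example.com/cron.php" >/dev/null 2>&1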
You can use it for testing if a page is downloading ok, but usually it's used to force proxy servers to cache their contents.
If you're sitting on a connection where a network appliance caches content between the site and your endpoint, and you have a site that's popular among the users on that network, then what you may want to do as a sysadmin is use a machine just behind the proxy to script a recursive (-r) or mirror (-m) wget operation.
The proxy appliance will see this and pre-cache the site and its assets, thus making site access a bit faster for users behind said proxy.
You'd then want to specify --delete-after to free up the disk space used, unless you want to keep a local copy of every site you force into the cache.
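A cache-priming run along those lines could be as simple as this (depth and URL are arbitrary):
# crawl two levels deep so the proxy caches the pages, then discard the local copies
wget -r -l 2 -np --delete-after http://example.com/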
Sometimes you only need to visit a website to set an IP address - say if you are rolling your own dyn dns service.
Let's say I have an Emacs server running on some remote machine, with all the libraries and software necessary for running my application.
Then I want several clients to connect to that remote machine, using emacsclient. Does each client need a full Emacs installation, or is there a minimal installation that is just enough to communicate with the remote server, where all the action is?
Could this (Emacs) client installation be so minimal that almost all software updates can be done on the server, without affecting the clients?
Is there a reason not to run the clients remotely as well, and simply use a local display? That way, pretty much all you need on the local machines is the ssh client and the X Window server.
ssh -X user@server "emacsclient -c"
Edits for the comments:
This command starts a new client to connect to an existing Emacs server (which it assumes is already running). You can use "emacsclient -a '' -c" to automatically start emacs --daemon if there is no existing server, but I don't know whether you want the connecting user to be starting the server.
In fact, I'm pretty unsure about the whole multi-user side of this to be honest, as I've never done that before. Authentication for the above is handled by ssh, but there may well be subsequent permission issues to deal with, or similar, when the server and the clients are started by different users.
This approach should be possible with Windows/Cygwin as client and/or server, as Cygwin provides Emacs, OpenSSH, and X.org packages. (I regularly use Windows/Cygwin as a local display for Emacs running on Linux.) It may be harder to set up, though, and any permissions issues are probably different when you're using Cygwin.
I'm less sure how this would work without Cygwin. NTEmacs certainly won't talk to X.org, so I imagine you'd be terminal based in that instance. (There are probably other options, but Cygwin sounds to me like the best-integrated approach to using all of Emacs, SSH, and X on Windows).
Lastly, I imagine you're probably getting your "Connection refused" error because localhost is not running a sshd daemon? I would say that configuration of ssh is outside the scope of this question, but there are lots of resources online for that.
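Putting the pieces together, and assuming a single shared account on the server, the workflow might look like:
# on the server, once: start a persistent Emacs daemon
emacs --daemon
# on each client machine: open a new frame on your local X display
ssh -X user@server "emacsclient -c -a ''"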
Depending on what you're trying to achieve, you may be able to use a combination of Emacs and Screen. By starting up Emacs from Screen on the remote machine and detaching from it, you can subsequently re-attach from a different machine that doesn't have Emacs. Again, whether this will work for you or not depends on what you're trying to do; however, for many Emacs use-cases, this can be very effective. If you're not familiar with using Screen in this manner, here is some reading material:
screen - The Terminal Multiplexer
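The basic Screen workflow, in case it helps (the session name is arbitrary):
# on the remote machine: start a named session and run Emacs inside it
screen -S emacs
emacs -nw
# detach with C-a d and log out; later, from any machine with ssh:
ssh -t user@server screen -r emacs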
I am not sure that would be possible. emacsclient uses tramp to connect to a remote server, and just by looking at the number of requires in the tramp elisp files (41) it seems very unlikely. You can try it yourself with the following:
zgrep -oE "\(require '[a-z-]+\)" *el.gz | sed -e 's%[a-z0-9-]\+\.el\.gz:%%g' | sort | uniq -cu | wc -l
I'm not an expert in emacsclient, but I don't think it was designed to do what you're looking for. I think the general use case is that emacsclient allows you to redirect new requests to open a file with Emacs to a persistent Emacs process, to avoid what may be a bit of an overhead in startup time. You seem to be looking for more of a true client/server relationship.
I think to meet the goal you're aiming at, you'll probably need to look a little outside Emacs, probably a project unto itself: 'emacsRemoteClient'. It boils down to one of two models: one, the file you want to edit would need to have its path sent over to the server machine so that Emacs could do some sort of remote tramp access and then spawn the X window locally (using the local X environment, or requiring an X server on Windows); or two, transferring the file to some temp location on the server box and again spawning the remote X window locally (followed by syncing the changes between the temp and local files).
It would be cool to have something like that... but I suspect it'll involve a bit of work. Maybe we just need a version of Emacs written in JavaScript so it can live in the cloud or in your browser... oh, to have Emacs keybindings in the browser ;-)
-Steve
I want to deploy a PSGI script that runs in Apache2 with Plack. Apache is configured with:
<Location "/mypath">
SetHandler perl-script
PerlResponseHandler Plack::Handler::Apache2
PerlSetVar psgi_app /path/to/my/script.psgi
</Location>
When I test the script with plackup, the --reload parameter watches updates on the .psgi file. In the production environment it is fine that Apache and Plack do not check and restart on each change for performance reasons, but how can I tell them explicitly to restart Plack::Handler::Apache2 and/or the PSGI script to deploy a new version?
It looks like Plack regularly checks for some changes, but I have no clue when. Moreover, it seems to create multiple instances, so I sometimes get different versions of script.psgi at /mypath. It would be helpful to be able to flush the Perl response handler manually, without having to restart Apache or wait for an unknown amount of time.
The short answer is you can't. That's why we recommend using plackup (with -r) for quick development and Apache only for deployment (production use).
The other option is to have a development Apache process and set MaxRequestsPerChild to a really small value, so that you get a fresh child spawned after a very short period of time. I haven't tested this, and doing so will definitely impact the performance of your entire httpd if the non-development application runs in the same process (which is a bad idea in the first place anyway).
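If you do try that, the directive lives in the MPM configuration rather than the vhost; a deliberately extreme, untested development value would be something like:
<IfModule mpm_prefork_module>
MaxRequestsPerChild 2
</IfModule>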
Apache2::Reload (untested)
You can move your application out of the Apache process,
e.g.
FastCgiExternalServer /virtual/filename/fcgi -socket /path/to/my/socket
and run your program with
plackup -s FCGI --listen /path/to/my/socket --nproc 10 /path/to/my/script.psgi
This way you can restart your application without restarting apache.
If you save the pid of the main FCGI process (--pid $pid_file), you can easily restart it and load your new code.
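A sketch of how that might fit together, with made-up paths (the pid and daemonize options are handled by the FCGI handler):
# start the application as a daemon, writing its pid file
plackup -s FCGI --listen /path/to/my/socket --nproc 10 --daemonize --pid /path/to/plack.pid /path/to/my/script.psgi
# deploy a new version: stop the old processes, then start them again
kill $(cat /path/to/plack.pid)
plackup -s FCGI --listen /path/to/my/socket --nproc 10 --daemonize --pid /path/to/plack.pid /path/to/my/script.psgi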
There is also a module available to manage (start, stop, restart) all your FCGI pools:
https://metacpan.org/pod/FCGI::Engine::Manager::Server::Plackup (not tested)