three ways to let PHP and a regular user edit the same files

I am a web developer, and for some upcoming projects I would like to use a file-based CMS. This means that many of the files I create at the start must be editable by the PHP user later, but also remain editable for my user (and also the other way around). My PC runs Debian 9, which I love but am not super knowledgeable about, and I have also just set up a local network server with Debian 9 for backups and possibly file sharing. (I'm using Webmin to configure this, which reflects my level of command line skills).
On my online shared hosting server, the PHP user and the FTP user seem to be the same, and 644/755 permissions work fine; this is also what the CMS I'm using recommends. I would like to mimic this on my computer so I don't have to fiddle with permissions all the time. But how do I do this? Currently, my regular user (anna) does not have access to www-data's files and vice versa. Putting them in the same group would still mean changing file permissions. Making anna the PHP user is a Bad Idea (as far as I understand it) because anna has sudo permissions.
So far I have researched three possible solutions that I don't really know very much about, and I would like to know which is the best route to take.
1. Develop locally on my computer and use apache-mpm-itk or suPHP to let PHP edit the files (I got that idea from this question on ServerFault).
2. Develop locally on my computer and rsync the files to my server with grunt-rsync, and somehow get rsync to set the ownership to www-data (another ServerFault thread helping here; see the sketch after this list).
3. Mount the project's server directory, which is owned by www-data, on my computer with SSHFS and then either edit the files on the server directly or copy them over from my local directory with grunt-copy.
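(For context, option 2 would presumably boil down to something like the following - just a sketch, assuming rsync 3.1 or newer and root rights for rsync on the receiving side, e.g. passwordless sudo for rsync; the paths are made up:)
# push the project and hand ownership to www-data on arrival
rsync -av --chown=www-data:www-data --rsync-path="sudo rsync" ./myproject/ anna@server:/var/www/html/myproject/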
What do you think: from a security and ease of use perspective, which is the best way? Or do you know an even better one?
Thank you for taking the time to read and think about this!
Anna~

I figured it out! I finally ended up reading about running PHP as (Fast)CGI instead of as an Apache module, and learned that this would solve my permissions problem. Plus, as far as I understand it, there are no extra security precautions to take when I'm the only one working with it on my local computer.
In case someone comes across this who might find it helpful, here's what I did (basically following these instructions):
I installed php7.0-fpm
Edited /etc/apache2/sites-enabled/000-default.conf and put the following just before </VirtualHost>:
DirectoryIndex index.php
<LocationMatch "^(.*\.php)$">
ProxyPass fcgi://127.0.0.1:9000/var/www/html
</LocationMatch>
I activated the Apache module proxy_fcgi (via Webmin, which apparently does an automatic Apache restart)
In /etc/php/7.0/fpm/pool.d/www.conf I commented out a listen line and put another below like this:
; listen = /run/php/php7.0-fpm.sock
listen = 127.0.0.1:9000
I then restarted PHP-FPM with this command: /etc/init.d/php7.0-fpm restart (a little different from the instructions, since I'm on Debian 9). After that, phpinfo() gave me the Server API "FPM/FastCGI".
And finally, I changed the user and group from www-data to anna in three places, twice in /etc/php/7.0/fpm/pool.d/www.conf and then once more in /usr/lib/tmpfiles.d/php7.0-fpm.conf (this last bit may be Ubuntu/Debian specific, my thanks go to Keith for a comment on StackExchange).
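For completeness, those edits look roughly like this (the tmpfiles.d line is from memory, so double-check the exact columns on your system):
In /etc/php/7.0/fpm/pool.d/www.conf:
user = anna
group = anna
In /usr/lib/tmpfiles.d/php7.0-fpm.conf, the ownership columns of the /run/php entry:
d /run/php 0755 anna anna -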
And that was it! :-)

How to 'sudo' without 'newgrp'

I have two CentOS 6.8 servers running on VirtualBoxes.
On one, I can login as a regular user then use "sudo" to run administrator commands. I just add "sudo" to the front and all works as expected.
On the other, I need to first run "newgrp wheel", otherwise it nags me that I'm not in the sudoers file. Once that's done, all is well.
As far as I can tell, both systems are otherwise identical. The username in both cases has a primary group of "users" and is also a member of "wheel" and "apache" groups. The "wheel" group, of course, has been given "ALL" access via "visudo".
The only difference, if it's important, is that the first one is a VM on Linux, and I access it via PuTTY. The nagging one is a VM on Windows, and I access it via the VirtualBox screen.
It's not a very big issue, but I like not needing the extra step. Does anyone know what is going on here?
Well, it turns out the systems were not as identical as I thought. The sudoers file (edited via "visudo") on the nagging system had somehow been restored to its original version, which meant that the "%wheel" directive was commented out. I only discovered that while trying to add a 10-minute timeout.
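For reference, this is the standard wheel line that had to be uncommented (via "visudo"):
%wheel  ALL=(ALL)       ALL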

Server Manipulation using Linode Manager or other techniques to develop a mirror site

A quick question: does anyone know how I can take an address, say www.google.com, and create a mirror site such as www.newsite.google.com?
I'm a beginner when it comes to server testing and server manipulation. I normally just drag and drop through SFTP/FTP.
Any ideas or thoughts?
Thanks, Chris
After a couple of tips from a former colleague, we sat down and talked about this issue.
First of all we looked at Apache vhosts - I needed to add a virtual host (and a directory) for my subdomain, e.g. new.google.com.
To do this I had to SSH into the server and run sudo, vim and a range of commands on directories and files.
As I use the Linode Manager, I also had to add an A record to make the subdomain resolve.
All now works and I have a new subdomain.
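For anyone finding this later, the vhost for such a subdomain boils down to something like this sketch (the ServerName and DocumentRoot are placeholders, not my real config):
<VirtualHost *:80>
    ServerName new.example.com
    DocumentRoot /var/www/new.example.com/public_html
    # on Debian/Ubuntu: save as a site file, enable it with a2ensite, then reload Apache
</VirtualHost>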

Apache2 reload config from inside the CGI

I am working on a simple Perl app that copies another Perl app and builds all the required Apache config files.
The thing I can't seem to figure out is how to reload the Apache config on the fly. I know I could do a system call and reload Apache there, but that would mean giving this app root access, and that is a little scary.
Is there a way to ask apache to reload its config files from within the CGI container?
-------------------------Additional info------------------------------
I have done some more research, and the problem is that Apache must run with elevated privileges to bind to port 80. So one solution would be to run Apache on another port and forward port 80 to it via iptables (that may be a last resort, but it is a very messy solution).
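(For the record, that forwarding trick would be something like the following, run as root and assuming Apache listens on 8080:)
# redirect incoming traffic on port 80 to Apache's unprivileged port
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080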
Here is what gets me: Apache should be able to keep its current port bindings and simply re-read its config files; all I am doing is adding another ScriptAlias.
Is there any way to add a new ScriptAlias without a reload?
You also have the option of reloading the config:
/etc/init.d/httpd reload
or
apachectl -k graceful
But unfortunately, those need root also. This differs from a normal restart in that currently open connections are not aborted. A side effect is that old log files will not be closed immediately. This means that if used in a log rotation script, a substantial delay may be necessary to ensure that the old log files are closed before processing them.
Also, if running Apache with daemontools you can do this by:
svc -h /service/apache
Sorry to ask a question and then not give someone else the opportunity to answer, but I figured out a solution and I hope it may help someone else.
What I had to do was leave the config alone; it is not possible to reload it in the manner I required without root privileges or some fancy port forwarding (which would make this application less portable than I would like).
So the only thing that Apache appears to load dynamically is the file system.
What I have done is use mod_rewrite to redirect the script requests, and simply put the scripts in /var/www/appname/copyname/cgi-bin/
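The rule itself is nothing fancy; roughly this, in the vhost config (the URL pattern and directory are illustrative, not my exact rule, and the target directory still needs CGI execution enabled):
RewriteEngine On
RewriteRule ^/copyname/cgi-bin/(.*)$ /var/www/appname/copyname/cgi-bin/$1 [L]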

what's the purpose of the '--delete-after' option of wget?

I came across the "--delete-after" option when I was reading the manpage of wget.
What's the purpose of providing such an option? Is it just for testing that a page downloads ok? Or are there other situations where this option is useful? I hope you can give me some hints.
With reference to your comments above, I'm providing some examples of how we use it. We have a few websites running on Rackspace Cloud Sites, which is a managed cloud hosting solution. We don't have access to regular cron.
We had an issue with runaway usage on a site using WordPress, because WP kept calling wp-cron.php. To give you a sense of the runaway usage: it used up in one day the CPU cycles allotted for a month. Anyway, what I did was disable wp-cron.php being called from within WordPress and call it manually through wget. I'm not interested in the output from the process, so if I didn't use --delete-after with wget (wget ... > /dev/null 2>&1 works well too), the folder where wget runs would fill up with hundreds of useless logs and output files from each time the script was called.
We also have SugarCRM installed, and that system requires its cron script to be called to handle system maintenance. We use wget silently for that as well. Basically, a lot of these web-based systems have cron scripts. If you can't call your scripts directly, say using php on the machine, then the other option is to call them silently with wget.
The command to call these cron scripts is quite basic - wget --delete-after http://example.com/cron.php?parameters=if+needed
I'm using wget (with cron) to automate commands to a web application, so I have no interest in the contents of the pages. --delete-after is ideal for this.
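The crontab entry is as mundane as it sounds; something along these lines (the URL and schedule are just examples):
# every 15 minutes, hit the app's maintenance URL and discard the download
*/15 * * * * wget -q --delete-after http://example.com/cron.php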
You can use it for testing whether a page downloads ok, but usually it's used to force proxy servers to cache the content.
If you're sitting on a connection where there's a network appliance caching content between the site and your endpoint, and you have a site that's popular among users on that network, then what you may want to do as a sysadmin is use a machine just downstream of the proxy to script a recursive ("-r") or mirror ("-m") wget operation.
The proxy appliance will see this and pre-cache the site and its assets, making access to the site a bit faster for users behind said proxy.
You'd then want to specify "--delete-after" to free up the disk space used, unless you want to keep a local copy of all the sites you force into the cache.
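A cache-warming run of that sort might look something like this (the host and depth are made up):
# recursively fetch the site through the proxy, then discard the local copies
wget -r -l 3 -np --delete-after http://www.example.com/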
Sometimes you only need to visit a website to set an IP address - say, if you are rolling your own dynamic DNS service.

How to configure MAMP to serve perl CGI scripts (NOT localhost!)

I'm using MAMP-pro to serve my domain to the outside world.
I'm not a very experienced sysadmin, though I've slogged my way through a few basic things. I know what Apache is, and I can read most of the related .conf files, but I can't write them without a guide.
I've got a perl script which I've tested from the command line and it works (outputs as desired.)
When I try to access said script from the browser, I get 404.
I've tried placing the script at:
/Users/me/Sites/mydomain.com/htdocs/mycgi.pl
/Users/me/Sites/mydomain.com/cgi-bin/mycgi.pl
/Users/me/Sites/mydomain.com/htdocs/cgi-bin/mycgi.pl
and accessing it as:
http://www.mydomain.com/mycgi.pl
http://www.mydomain.com/cgi-bin/mycgi.pl
and all the various combinations, all to no avail (404.)
The script and its container directory have permissions 755.
So, what other steps am I missing? Are there any good set-up guides? I tried the MAMP-Pro manual, but it is filled with such information as "the cancel button cancels the current operation" and not really anything useful. Google turned up several hits that all seem to talk about how to make this work on localhost, but I'm trying to serve this to the outside world.
Any hints?
Thanks!
The official online documentation has a section on virtual hosts. When creating a host for www.mydomain.com you can choose the DocumentRoot, which is called "Disk location" within MAMP PRO. If you still get a 404 error, take a look at the error_log for a more specific reason (i.e., where Apache tries to find the file in question).
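If the 404 persists even with the right DocumentRoot, CGI execution also has to be enabled for the script directory. A rough sketch of what that looks like in the vhost config (Apache 2.4 syntax, paths taken from the question; MAMP's bundled Apache and its templates may need slightly different directives):
<VirtualHost *:80>
    ServerName www.mydomain.com
    DocumentRoot "/Users/me/Sites/mydomain.com/htdocs"
    # requires mod_cgi (or mod_cgid) to be loaded
    ScriptAlias /cgi-bin/ "/Users/me/Sites/mydomain.com/cgi-bin/"
    <Directory "/Users/me/Sites/mydomain.com/cgi-bin">
        Options +ExecCGI
        AddHandler cgi-script .pl
        Require all granted
    </Directory>
</VirtualHost>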