I have PHP running via FastCGI on Apache 2. The PHP process uses a Unix socket.
Would it be possible to access the socket from the command line and execute a PHP script that way?
I have some long-running operations that can take hours to execute, so going through the web server is not an option. On the other hand, calling PHP from the command line directly is not ideal either, because the CLI process cannot access the shared caches of the FastCGI PHP process.
I tried the socket command, but I do not really know what to do with it.
I finally got it working with cgi-fcgi.
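In case it helps anyone else, the call looks roughly like this (the PHP script path is specific to my setup, and the -connect argument is whatever Unix socket your Apache FastCGI configuration already points at, so treat both as placeholders):

SCRIPT_FILENAME=/var/www/longtask.php \
REQUEST_METHOD=GET \
cgi-fcgi -bind -connect /var/run/php/php-fcgi.sock

cgi-fcgi builds the FastCGI request from the CGI-style environment variables and writes the response to stdout, so a long-running job can be kicked off from a shell or cron while still hitting the same PHP process (and its caches) as the web requests.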
I am trying to get FastCGI to work with Perl CGI in FastCGI mode. By that I mean that I can run Perl CGI scripts on Nginx without any issues, but it does not appear (based on performance measurements) that my scripts are actually running in FastCGI mode. I've seen mentions of spawn-fcgi, but nothing recent and no details on how to set it up with current versions of Nginx and Ubuntu.
I am relatively new to Linux.
Environment info:
Ubuntu 22.04 hosted in the "cloud"
nginx version: nginx/1.18.0 (Ubuntu)
Tutorial I followed: https://techexpert.tips/nginx/perl-cgi-nginx/
I have installed fcgiwrap.
CGI works, HTTPS works.
I am enclosing the bulk of my Perl CGI script within
use CGI::Fast;
while(my $q = new CGI::Fast)
{
# all my code
}
My script is doing a simple write to and query against a very small SQLite3 database (4 columns, 500 rows, about 290 KB on disk).
I am testing across the Internet.
Script execution performance is about half a second (0.42 seconds per page, or 345 ms). That is about the same as Windows CGI under IIS.
I also saw that requesting a simple static HTML file performs only marginally better.
I understand that there is a lot more to performance testing than the preceding. The point is that I think something is missing from my FastCGI configuration. I imagine it has something to do with needing to launch the Perl FastCGI script as a persistent process and then connecting it to Nginx somehow (see the sketch below). If anyone can point me in the right direction or provide example config files, that would be great. Thanks all!
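From reading around, I think the missing piece may be launching the script as a persistent FastCGI process with spawn-fcgi and pointing Nginx at its socket, instead of going through fcgiwrap (which, as far as I can tell, re-runs the script on every request, so the CGI::Fast loop never persists). This is only a sketch of what I imagine the setup looks like; the socket path, user, and script location are placeholders:

spawn-fcgi -s /run/perl-fcgi.sock -U www-data -u www-data -- /var/www/cgi-bin/myscript.pl

location ~ \.pl$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/perl-fcgi.sock;
}

Is that roughly the right direction?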
I am creating a Perl script that sets up a Net::WebSocket::Server on port 3000. I had the (not so brilliant) idea of starting the script in the browser via CGI, so now it runs in the background and cannot be stopped. However, I have to restart the script whenever I modify it.
Is it possible to stop a CGI script that is stuck in an endless loop, other than by restarting the computer?
You didn't say what operating system you are on, so we cannot give you specific advice on how to find and kill the process. But you can always restart the web server. CGI scripts are children of the server process (probably Apache) that started them, so if you restart the Apache server they should all be terminated.
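On Linux, for example, something along these lines should work (assuming the script's file name is distinctive enough to match on, and a Debian/Ubuntu-style Apache service name):

pgrep -af myscript.pl              # find the process id(s)
pkill -f myscript.pl               # kill them
sudo systemctl restart apache2     # or simply restart the web server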
Please don't put code that is supposed to run persistently in your cgi-bin directory. That's a bad idea, as you discovered.
I'm running lighttpd as a daemon with fastcgi and web.py on CentOS using:
service lighttpd start
which works. My site loads. But now the output from web.py (i.e. any exceptions, a log of requests, etc.) is nowhere to be found. Where does stdout go?
I've looked in /var/log/lighttpd/ at access.log and error.log, and neither holds the output from web.py.
AFAIK, stdout from FastCGI processes in lighttpd is simply ignored.
If you want to get the stderr output of the FastCGI process, you can use the server.breakagelog option described at http://redmine.lighttpd.net/projects/1/wiki/docs_modcgi:
server.breakagelog = "/var/log/lighttpd/breakage.log"
But this is raw stderr output, with no timestamps or source information.
I suggest using web.py's or Python's logging facilities and logging to a file instead.
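For example, something like this near the top of the web.py application (a minimal sketch; the log file path is just an example and must be writable by the user the FastCGI process runs as):

import logging

logging.basicConfig(
    filename='/var/log/lighttpd/webpy-app.log',
    level=logging.INFO,
    format='%(asctime)s %(levelname)s %(message)s')

logging.info('application started')

Inside exception handlers you can then call logging.exception(...) to get a full traceback with a timestamp in the same file.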
I'm inheriting a file transfer environment with a collection of scripts written in Perl running on Linux. In a nutshell, these scripts just transfer files between sites using SFTP and SMB/CIFS protocols.
I've noticed that the scripts use Net::SFTP::Foreign for the SFTP connection handling.
Are there any advantages to using Perl modules to accomplish the connections and transfers, as opposed to just calling external commands like lftp or smbclient?
You usually get better error detection and reporting using a module. I can't think of any good reason to change already working code to use an external command instead.
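For example, with Net::SFTP::Foreign the failure reason comes back as data you can test, log, or retry on, rather than as command output you would have to parse. A rough sketch (host, user, and paths are made up):

use strict;
use warnings;
use Net::SFTP::Foreign;

my $sftp = Net::SFTP::Foreign->new('sftp.example.com', user => 'transfer');
$sftp->die_on_error("Unable to connect to remote host");

$sftp->get('/remote/report.csv', '/local/incoming/report.csv')
    or die "download failed: " . $sftp->error;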
I am trying to connect to an external SOAP service using PHP and have written a small PHP test script that just connects to the service and performs a simple request to check that everything is working.
This all works correctly, but when I run it via a browser request it is very slow, taking somewhere in the region of 40 seconds to establish the initial connection. When I do the same request using the exact same script on the command line, it goes through straight away.
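The test script is essentially just the following (simplified here, with a placeholder WSDL URL):

<?php
// build the client from the WSDL and make one trivial call
$client = new SoapClient('https://example.com/service?wsdl');
var_dump($client->__getFunctions());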
Does anyone have any ideas as to why this might be?
Cheers
PHP caches the WSDL in /tmp. If you run from the command line first, the cache file will be owned by whatever user you ran the script as, and Apache won't be able to read it. The WSDL will then have to be downloaded and parsed on every request, which is slow.
Check the permissions of /tmp/wsdl*.
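For example (the exact cache file names vary between PHP versions):

ls -l /tmp/wsdl*
# if the files are owned by your shell user rather than the Apache user,
# delete them and let Apache rebuild its own copies
sudo rm /tmp/wsdl*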
Maybe the external SOAP service is trying to check your IP, and ICMP is allowed for your server but not for your local network.
Anyway, this question might be answered more clearly by the administrator of the external SOAP service :)
Is there a difference between the php.ini files that are being used?
On a standard Ubuntu server installation:
diff /etc/php5/apache2/php.ini /etc/php5/cli/php.ini
//edit:
Another difference might be in the include paths. I had this trouble myself on a local test server: it didn't actually use the SOAP class that was supposed to be included (it didn't include anything, because the search paths weren't valid), but silently fell back to the built-in SoapClient class.
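A quick way to compare is to print the effective include path in both environments, e.g.

php -r 'echo get_include_path(), PHP_EOL;'

on the command line, and a one-line script containing <?php echo get_include_path(); ?> requested through the browser.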