Why doesn't my Perl script start under FastCGI?

I am getting this error in the error_log of one of my Perl CGI applications. I am pretty sure I haven't changed my script at all, and all of a sudden I have started getting this error.
This is what I see in error_log:
[Wed Jul 8 15:18:20 2009] [warn] FastCGI: server "/local/web/test/cgi-bin/test.pl" (pid 17033)
terminated by calling exit with status '255'
[Wed Jul 8 15:18:20 2009] [warn] FastCGI: server "/local/web/test/cgi-bin/test.pl"
has failed to remain running for 30 seconds given 3 attempts, its restart interval has been backed off to 600 seconds
(The snippet was edited for clarity)
Also, the AddHandler line for FastCGI exists in the config file.
Can anyone tell me the possible reasons for this error?
There is nothing recorded in Apache logs.

Here are two tips that might help (assuming your app is respecting the FastCGI protocol):
1. Try running the app from the command line; this proves that the execute bit is set and that the code has no compile errors.
2. Check your Apache server's suexec.log; it may show user/group or other errors related to your script.

You could try redirecting STDERR from your Perl script, something like:
BEGIN { open STDERR, '>stderr.log' }
If your stderr.log file doesn't get created at all, it means that the script didn't even get executed, likely a suexec/permissions problem. Otherwise, you should have whatever went wrong with the Perl script in that file.
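If nothing shows up where you expect it, keep in mind that the relative path above depends on the FastCGI process's working directory, which may not be the script's directory. A slightly more defensive variant is sketched below; the path is only an example, so point it at a directory the web server user can actually write to:
# A sketch only: redirect STDERR to an absolute, writable path so it does not
# depend on the current working directory; appending ('>>') keeps output
# across FastCGI restarts. /tmp/test-fcgi.stderr.log is just an example path.
BEGIN {
    open STDERR, '>>', '/tmp/test-fcgi.stderr.log'
        or die "Cannot redirect STDERR: $!";
}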

Related

Perl modules not installing. sh: 1: gzip: Exec format error

I am getting issues when trying to install Perl modules on Ubuntu running on Windows 10.
I am not sure of the best way to post the output; I am guessing maybe as "code":
$ cpan HTTP::Tiny
Loading internal logger. Log::Log4perl recommended for better logging
Reading '/home/user/.cpan/Metadata'
Database was generated on Sun, 06 Nov 2022 23:29:01 GMT
Running install for module 'HTTP::Tiny'
Fetching with HTTP::Tiny:
http://www.cpan.org/authors/id/D/DA/DAGOLDEN/HTTP-Tiny-0.082.tar.gz
Fetching with HTTP::Tiny:
http://www.cpan.org/authors/id/D/DA/DAGOLDEN/CHECKSUMS
Checksum for /home/user/.cpan/sources/authors/id/D/DA/DAGOLDEN/HTTP-Tiny-0.082.tar.gz ok
sh: 1: /usr/bin/gzip: Exec format error
/usr/bin/tar: This does not look like a tar archive
/usr/bin/tar: Exiting with failure status due to previous errors
Uncompressed /home/user/.cpan/sources/authors/id/D/DA/DAGOLDEN/HTTP-Tiny-0.082.tar.gz successfully
Using Tar:/usr/bin/tar xf "HTTP-Tiny-0.082.tar":
Untarred HTTP-Tiny-0.082.tar successfully
'YAML' not installed, will not store persistent state
Package contains both files[HTTP-Tiny-0.082.tar] and directories[HTTP-Tiny-0.082]; not recognized as a perl package, giving up
Configuring D/DA/DAGOLDEN/HTTP-Tiny-0.082.tar.gz with Makefile.PL
Running make for D/DA/DAGOLDEN/HTTP-Tiny-0.082.tar.gz
make: *** No targets specified and no makefile found. Stop.
DAGOLDEN/HTTP-Tiny-0.082.tar.gz
/usr/bin/make -- NOT OK
I am really stuck on how to fix this. I have been scratching my head for a few days now, and any assistance would be really appreciated.
Based on the log it seems to be an issue specifically with Ubuntu 22.04 on WSL1:
sh: 1: gzip: Exec format error
see: gzip from Ubuntu Jammy doesn't execute #8219
You could install an earlier version of gzip via apt that works; however, the best route would be to update to WSL 2, as it has many benefits and won't have this issue.
See the comparison page, which includes some rare cases where you might want to stay on WSL 1.
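As a quick sanity check (a sketch only, assuming a Unix shell with gzip on the PATH), the following confirms, from the same shell you run cpan in, whether gzip actually executes and round-trips data:
# gzip_check.pl -- a diagnostic sketch, not part of cpan itself
use strict;
use warnings;
my $out = qx(echo cpan-test | gzip -c | gzip -dc);
print $out eq "cpan-test\n"
    ? "gzip round-trips fine\n"
    : "gzip appears broken (consistent with the WSL 1 Exec format error)\n";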

Program stalls on "DEBUG Starting new HTTPS connection (1): login.microsoftonline.com:443" only when executed via cron

I'm using the python-O365 library in order to access my O365 mailbox. The project requires me to execute the program in a docker container. If I start the program manually (as root), everything works fine, but if I try to start it via cron, it stalls on DEBUG Starting new HTTPS connection (1): login.microsoftonline.com:443, which I found out after activating logging.
The minimal code example that reproduces the error (with log):
import O365
from utils.credentials import get_credentials
import logging # We want to get additional information
logging.basicConfig(
    filename='./easy_log_file.log',
    filemode='a',
    format='%(levelname)s %(message)s',  # %(asctime)s %(pathname)s %(lineno)d
    level=logging.DEBUG
)
filename = "o365_token.txt"
token_backend = O365.FileSystemTokenBackend(token_path = filename)
account = O365.Account(get_credentials(), token_backend=token_backend)
inbox = account.mailbox().inbox_folder()
messages = inbox.get_messages()
for message in messages:
    logging.info(message)
logging.info("finished")
To start it via cron, I used the following command:
echo "15 21 * * * bash /workspace/daemon_start.sh >> /workspace/cronlogs/logs_daemon_mail.log" | crontab. If I start the program manually, the log continues like this:
DEBUG Starting new HTTPS connection (1): graph.microsoft.com:443
DEBUG https://graph.microsoft.com:443 "GET /v1.0/me/mailFolders/Inbox/messages?%24top=100&%24filter=isRead+eq+false HTTP/1.1" 200 None
DEBUG Received response (200) from URL https://graph.microsoft.com/v1.0/me/mailFolders/Inbox/messages?%24top=100&%24filter=isRead+eq+false
If the program is started via cron, sometimes the log continues like this:
DEBUG Incremented Retry for (url='/common/oauth2/v2.0/token'): Retry(total=2, connect=3, read=3, redirect=None, status=None)
WARNING Retrying (Retry(total=2, connect=3, read=3, redirect=None, status=None)) after connection broken by 'SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1131)'))': /common/oauth2/v2.0/token
In order to resolve the issue, I added my proxy by using account = Account(credentials, token_backend=token_backend, proxy_server="proxy.my_proxy.com"). It's strange that I would have to add it, since the container is already configured to use this proxy. When I tried it with this setting, I encountered the same issue, except that the log, when started via cron, now always continued, and much faster.
Since I think that cron simply starts the program and does not meddle with the connections, it doesn't make sense to me that I get different outcomes depending on whether I start it manually or via cron.

NFS mount points are going off/NFS compound failed for server mashost

We have an application on Solaris. During a specific test case we generate a heap dump, which is written to the server at a specific path. During this test we get the following error in the trace file:
java.lang.OutOfMemoryError: Java heap space
Dumping heap to /ossrc/upgrade/JREheapdumps/java_pid16092.hprof ...
Dump file is incomplete: I/O error
and in /var/adm/messages we could see
Oct 28 13:00:10 ossuas2 nfs: [ID 733954 kern.info] NOTICE: [NFS4][Server: mashost][Mntpt: /ossrc/upgrade]NFS server mashost not
responding; still trying
Oct 28 13:02:53 ossuas2 nfs: [ID 733954 kern.info] NOTICE: [NFS4][Server: mashost][Mntpt: /usr/local]NFS server mashost not
responding; still trying
Oct 28 13:04:53 ossuas2 nfs: [ID 733954 kern.info] NOTICE: [NFS4][Server: mashost][Mntpt: /etc/opt/ericsson]NFS server mashost not
responding; still trying
Can anyone please help explain why we are getting this problem, and can an application cause this impact on mashost?
First things first, check the NFS services with svcs. When it crashes, run:
# svcs -x nfs/client
on the client, and
# svcs -x nfs/server
on the server. I would expect one or both to be in the "maintenance" state. (You may see that it fails to start properly at all.) If it is in maintenance mode, you should see a row marked "Reason:" that says why.
You might also see "offline"; in that case, svc.startd will attempt to restart the service multiple times and, if it fails after five attempts or hangs indefinitely, will place it into the "maintenance" state and stop restarting it.
Check the logs in
/var/svc/log/<service-name FMRI>.log
There will be one on your client machine named "network-nfs-client:default" (probably; the instance may have a name other than 'default' if it was changed manually), and one on the server named "network-nfs-server:default".
See what you can glean from those.
SMF takes snapshots of service configurations all the time as backups, so you can try reverting to one of those with svccfg:
# svccfg -s nfs/server:default
svc:/network/nfs/server:default> listsnap
svc:/network/nfs/server:default> revert start [name_of_snapshot]
svc:/network/nfs/server:default> quit
# svcadm refresh nfs/server:default
# svcadm restart nfs/server:default
Make sure to include the ":default" tag (or, if "svcs nfs/server" showed a different tag, use that one); the tag names an instance of the service, and every running service is an instance.
If the service is failing to start, you might have to look at the XML manifest under /lib/svc/manifest/network/nfs/; inside, you'll see its dependencies (and the services that depend on it), then the "exec_method" entries, which define how the service starts, stops, and restarts.
Instead of snapshots, you can also restore it to its defaults: use svccfg -s <FMRI> delete to clear it, then svcadm refresh <FMRI> and svcadm enable <FMRI>.
If the service is in maintenance state, once you've isolated and fixed the problem, you can manually clear that state by running svcadm clear <FMRI>.

Exceptions in MongoDB/Cursor.pm line 161

Using MongoDB 2.4.5 with version 0.702 of the Perl MongoDB driver, I frequently run into exceptions like these:
recv timed out (30000 ms) at ...MongoDB/Cursor.pm line 161.
couldn't get response to throw out at ...MongoDB/Cursor.pm line 161.
missed the response we wanted, please try again at ...MongoDB/Cursor.pm line 161
invalid header received at ...MongoDB/Cursor.pm line 161.
can't get db response, not connected at ...MongoDB/Cursor.pm line 161.
The exceptions are intermittent, and often vanish on the next request (this is a web app). Occasionally, the exceptions will persist over several consecutive requests.
This is a tiny database running the default configuration (no sharding or anything fancy). I've tried using some of the tools listed here and here, but I'm unclear on how to apply them to this situation.
This is all running on Debian 7.1 64-bit. The web server is Mojolicious' hypnotoad 4.07 on perl 5.16.3 running behind apache2.
Can you kindly suggest some tools & strategies for diagnosing the problem? Thanks for your time.
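For reference, here is a minimal diagnostic sketch of the kind of instrumentation that could help narrow this down, timing each query and logging the exact driver error. It assumes the pre-1.0 driver's MongoDB::MongoClient with its query_timeout attribute (in milliseconds; the 30000 ms default matches the first error above) and Try::Tiny; the host, database, and collection names are placeholders:
# A diagnostic sketch only: time each query and log the exact driver error.
# Host, database, and collection names are placeholders.
use strict;
use warnings;
use Try::Tiny;
use Time::HiRes qw(time);
use MongoDB;

my $client = MongoDB::MongoClient->new(
    host          => 'mongodb://localhost:27017',
    query_timeout => 60_000,   # raise from the 30 s default while testing
);
my $coll = $client->get_database('mydb')->get_collection('mycoll');

my $start = time;
try {
    my @docs = $coll->find({})->all;
    printf STDERR "query ok: %d docs in %.3fs\n", scalar @docs, time - $start;
}
catch {
    printf STDERR "query failed after %.3fs: %s\n", time - $start, $_;
};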

Issues with running two instances of searchd

I have just updated our Sphinx server from 1.10-beta to 2.0.6-release, and now I have run into some issues with searchd. Previously we were able to run two instances of searchd next to each other by specifying two different config files, i.e.:
searchd --config /etc/sphinx/sphinx.conf
searchd --config /etc/sphinx/sphinx.staging.conf
sphinx.conf listens to 9306:mysql41, and 9312, while sphinx.staging.conf listens to 9307:mysql41 and 9313.
After we updated to 2.0.6, however, a second instance is never started. Or rather, the output makes it seem like it starts, and a pid-file is created, etc., but for some reason only the first searchd instance keeps running, and the second seems to shut down right away. So while running searchd --config /etc/sphinx/sphinx.conf a second time (if that was the first one started) complains that the pid-file is in use, running searchd --config /etc/sphinx/sphinx.staging.conf (if that was the second instance started) "starts" the daemon again and again, only no new process is ever created.
Note that if I switch the order in which the two commands are first run, then it is the sphinx.conf instance that is not really started.
I have checked, and rechecked, that these ports are only used by searchd.
Does anyone have any idea of what I can do or try next? I've installed it from source on Ubuntu 10.04 LTS with:
./configure --prefix /etc/sphinx --with-mysql --enable-id64 --with-libstemmer
make -j4 install
Note to self: Check the logs!
RT indices use binary logs to enable crash recovery. Since my old config files did not specify a path where these should be stored, both instances of searchd tried to write to the same binary logs. The instance started last was of course not permitted to manipulate these files, and thus exited with a fatal error:
[Fri Nov 2 17:13:32.262 2012] [ 5346] FATAL: failed to lock
'/etc/sphinx/var/data/binlog.lock': 11 'Resource temporarily unavailable'
[Fri Nov 2 17:13:32.264 2012] [ 5345] Child process 5346 has been finished,
exit code 1. Watchdog finishes also. Good bye!
The solution was simple: make sure to specify a binlog_path, pointing to a separate writable directory for each instance, inside the searchd section of each configuration file:
searchd
{
[...]
binlog_path = /path/to/writable/directory
[...]
}